Cyber Security

Cracking the Code: Navigating New SEC Rules Governing Cybersecurity Disclosure

Noah Schottenbauer, MJLST Staffer

Cybersecurity incidents have a dramatic impact on investors, both through declines in stock value and through the sizeable costs companies incur to rectify breaches. In response, the SEC adopted new rules governing cybersecurity-related disclosures for public companies. The rules cover the disclosure of individual cybersecurity incidents as well as periodic disclosures of a company’s procedures to assess, identify, and manage material cybersecurity risks; management’s role in assessing and managing cybersecurity risks; and the board of directors’ oversight of cybersecurity risks.[1]

Before evaluating the specifics of the new SEC cybersecurity disclosure requirements, it is important to understand why information about cybersecurity incidents matters to investors. In recent years, data breaches have led to an average decline in stock value of 7.5% amongst publicly traded companies, with effects felt long after the date of the breach: companies that experienced a significant data breach underperformed the NASDAQ by an average of 8.6% after one year.[2] One of the forces driving this decline in stock value is the immense cost of rectifying a data breach. In 2022, the average cost of a data breach for U.S. companies was $9.44 million, drawn from ransom payments, disruptions in business operations, legal and audit fees, and other associated expenses.[3]

Summary Of Required Disclosures

  • Material Cybersecurity Incidents (Form 8-K, Item 1.05)

Amendments to Item 1.05 of Form 8-K require that reporting companies disclose any cybersecurity incident deemed to be material.[4] When making such disclosures, companies are required to “describe the material aspects of the nature, scope, and timing of the incident, and the material impact or reasonably likely material impact on the registrant, including its financial condition and results of operations.”[5]

So, what is a material cybersecurity incident? The SEC defines cybersecurity incident as “an unauthorized occurrence . . . on or conducted through a registrant’s information systems that jeopardizes the confidentiality, integrity, or availability of a registrant’s information systems or any information residing therein.”[6]

The definition of material, on the other hand, lacks the same degree of clarity. Based on context offered by the SEC through the rulemaking process, material is to be used in a way that is consistent with other securities laws.[7] Under this standard, information, or, in this case, a cybersecurity incident, would be considered material if “there is a substantial likelihood that a reasonable shareholder would consider it important.”[8] This determination is made based on a “delicate assessment of the inferences a ‘reasonable shareholder’ would draw from a given set of facts and the significance of those inferences to him.”[9] Even with this added context, which characteristics of a cybersecurity incident make it material remains unclear. Given that the rules are intended to protect investor interests, however, the safest course of action is to disclose a cybersecurity incident when in doubt about its materiality.[10]

It is important to note that this disclosure mandate is not limited to incidents that occur within the company’s own systems. If a material cybersecurity incident happens on third-party systems that a company utilizes, that too must be disclosed.[11] However, in these situations, companies are only expected to disclose information that is readily accessible, meaning they are not required to go beyond their “regular channels of communication” to gather pertinent information.[12]

Regarding the mechanics of the disclosure, the SEC stipulates that companies must file a Form 8-K under Item 1.05 within four business days of determining that a cybersecurity incident is material.[13] However, delaying disclosure may be allowed in limited circumstances where the United States Attorney General determines that immediate disclosure would seriously threaten national security or public safety.[14]

If there are any changes in the initially disclosed information, or if new material information is discovered that was not available at the time of the first disclosure, registrants are obligated to update their disclosure by filing an amended Form 8-K, ensuring that all relevant information related to the cybersecurity incident is available to the public and stakeholders.[15]

  • Risk Management & Strategy (Regulation S-K, Item 106(b))

Under amendments to Item 106(b) of Regulation S-K, reporting companies are obligated to describe their “processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats in sufficient detail for a reasonable investor to understand those processes.”[16] When detailing these processes, companies must specifically address three primary points. First, they need to indicate whether and how the cybersecurity processes described in Item 106(b) fall under the company’s overarching risk management system or procedures. Second, companies must clarify whether they involve assessors, consultants, auditors, or other third-party entities in relation to these cybersecurity processes. Third, they must describe whether they have processes to oversee and identify material risks from cybersecurity threats associated with their use of any third-party service providers.[17]

In addition to the three enumerated elements under Item 106(b), companies are expected to furnish additional information to ensure a comprehensive understanding of their cybersecurity procedures for potential investors. This supplementary disclosure should encompass “whatever information is necessary, based on their facts and circumstances, for a reasonable investor to understand their cybersecurity processes.”[18] While companies are mandated to reveal if they collaborate with third-party service providers concerning their cybersecurity procedures, they are not required to disclose the specific names of these providers or offer a detailed description of the services these third-party entities provide, thus striking a balance between transparency and confidentiality and ensuring that investors have adequate information.[19]

  • Governance (Regulation S-K, Item 106(c))

Amendments to Regulation S-K, Item 106(c) require that companies: (1) describe the board’s oversight of the risks emanating from cybersecurity threats, and (2) characterize management’s role in both assessing and managing material risks arising from such threats.[20]

When detailing management’s role concerning these cybersecurity threats, several issues should be addressed. First, companies should clarify which specific management positions or committees are entrusted with the responsibility of assessing and managing these risks, and should describe the relevant expertise of those individuals or groups in sufficient detail. Second, companies should describe the processes by which these individuals or committees stay informed about, and monitor, the prevention, detection, mitigation, and remediation of cybersecurity incidents. Third, companies should indicate whether and how these individuals or committees report information about such risks to the board of directors or to a designated committee or subcommittee of the board.[21]

The disclosures required under Item 106(c) are aimed at balancing investor accessibility to information with the company’s ability to maintain autonomy in determining cybersecurity practices in the context of organizational structure; therefore, disclosures do not need to be overly detailed.[22]

  • Foreign Private Issuers (Form 6-K & Form 20-F)

The rules addressed above apply only to domestic companies, but the SEC imposed parallel cybersecurity disclosure requirements for foreign private issuers under Form 6-K (incident reporting) and Form 20-F (periodic reporting).[23]

Key Dates

The SEC’s final rules are effective as of September 5, 2023, but the Form 8-K and Regulation S-K reporting requirements have yet to take effect. The key compliance dates for each are as follows:

  • Form 8-K Item 1.05(a) Incident Reporting – December 18, 2023
  • Regulation S-K Periodic Reporting – Fiscal years ending on or after December 15, 2023

Smaller reporting companies are provided with an extra 180 days to comply with Form 8-K Item 1.05. Under this grant, small companies will be expected to begin incident reporting on June 15, 2024. No such extension was granted to smaller reporting companies with regard to Regulation S-K Periodic Reporting.[24]

Potential Impact On Cybersecurity Policy

The actual impact of the SEC’s new disclosure requirements will likely remain unclear for some time, yet the regulations compel companies to adopt a greater sense of discipline and transparency in their cybersecurity practices. Although the primary intent of these rules is investor protection, they may also influence how companies formulate their cybersecurity strategies, given the requirement to discuss such policies in their annual disclosures. This heightened level of accountability, regarding defensive measures and risk management strategies in response to cybersecurity threats, may encourage companies to implement more robust cybersecurity practices or, at the very least, ensure that cybersecurity becomes a regular topic of discussion amongst senior leadership. Consequently, the SEC’s initiative may serve as a catalyst for strengthening cybersecurity policies within corporate entities, while also providing investors with essential information for making informed decisions in the marketplace.

Further Information

The overview of the new SEC rules governing cybersecurity disclosures provided above is precisely that: an overview. For more information regarding the requirements and applicability of these rules, please refer to the official rules and the SEC website.

Notes

[1] Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, Securities Act Release No. 33-11216, Exchange Act Release No. 34-97989 (July 26, 2023) [hereinafter Final Rule Release], https://www.sec.gov/files/rules/final/2023/33-11216.pdf.

[2] Keman Huang et al., The Devastating Business Impacts of a Cyber Breach, Harv. Bus. Rev., May 4, 2023, https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach.

[3] Id.

[4] Final Rule Release, supra note 1, at 12.

[5] Id. at 49.

[6] Id. at 76.

[7] Id. at 14.

[8] TSC Indus. v. Northway, 426 U.S. 438, 449 (1976).

[9] Id. at 450.

[10] Id. at 448.

[11] Final Rule Release, supra note 1, at 30.

[12] Id. at 31.

[13] Id. at 32.

[14] Id. at 28.

[15] Id. at 50–51.

[16] Id. at 61.

[17] Id. at 63.

[18] Id.

[19] Id. at 60.

[20] Id. at 12.

[21] Id. at 70.

[22] Id.

[23] Id. at 12.

[24] Id. at 107.


The Policy Future for Telehealth After the Pandemic

Jack Atterberry, MJLST Staffer

The Pandemic Accelerated Telehealth Utilization

Before the Covid-19 pandemic began, telehealth usage in the United States healthcare system was insignificant (rounding to 0%) as a percentage of total outpatient care visits.[1] In the two years after the beginning of the pandemic, telehealth usage soared to over 10% of outpatient visits and has been widely used across all payer categories, including Medicare and Medicaid.[2] The social distancing realities of the pandemic years, coupled with federal policy measures, allowed for this radical transition toward telehealth care visits.

In response to the onset of Covid-19, the US federal government relaxed and modified many telehealth regulations, expanding permissible access to telehealth care services. After a public health emergency was declared in early 2020, the Centers for Medicare & Medicaid Services (CMS) and the Department of Health and Human Services (HHS) modified preexisting telehealth-related regulations to expand the permissible use of those services. Specifically, CMS temporarily expanded Medicare coverage to include telehealth services without the need for in-person visits, removed telehealth practice restrictions such as limits on the types of providers that could offer telehealth, and increased reimbursement rates for telehealth services to bring them closer to in-person visit rates.[3] In addition, HHS implemented modifications such as greater HIPAA flexibility, easing requirements around the use of popular communication platforms such as Zoom, Skype, and FaceTime, provided that they are used in good faith.[4] Collectively, these changes led to a significant rise in telehealth services and expanded access to care for many people who otherwise would not receive healthcare. Unfortunately, many of these telehealth policy provisions are set to expire in 2024, leaving open the question of whether the benefits of telehealth expansion will remain after the public health emergency measures end.[5]

Issues with Telehealth Care Delivery Between States

A major legal impediment to telehealth expansion in the US is the complex interplay of state and federal laws and regulations governing telehealth care delivery. At the state level, differences in several key areas have historically held back the expansion of telehealth. First, healthcare providers are most often licensed and credentialed at the state level, which creates a barrier for providers who want to offer telehealth services across state lines. While many states implemented temporary waivers or joined interstate medical licensure compacts to address this issue during the pandemic, many others have not, and large inconsistencies remain. Second, states differ significantly in reimbursement policy, with payer types covering telehealth differently from region to region; this has left providers unsure whether delivering care in certain states will be adequately reimbursed. Although the federal public health emergency helped ease interstate telehealth restrictions during the pandemic, these challenges will likely persist after the temporary telehealth measures are lifted at the end of 2024.

What the pandemic-era temporary easing of telehealth restrictions taught us is that interstate telehealth improves health outcomes, increases patient satisfaction, and decreases gaps in care delivery. In particular, rural communities and other underserved areas with relatively few healthcare providers benefited greatly from the ability to receive care from out-of-state providers. For example, patients in states like Montana, North Dakota, and South Dakota benefit immensely from being able to consult an out-of-state mental health provider because of severe shortages of psychiatrists, psychologists, and other mental health practitioners in those states.[6] In addition, a 2021 study by the Bipartisan Policy Center highlighted that patients in states that joined interstate licensure compacts experienced a noticeable improvement in care experience, and healthcare workforces saw a decreased burden on their chronically stressed providers.[7] These positive outcomes from eased interstate healthcare regulations should inform telehealth policy moving forward.

Policy Bottlenecks to Telehealth Care Access Expansion

The future of telehealth in American healthcare is surprisingly uncertain as the US emerges from the pandemic years. As the public health emergency measures that removed various legal and regulatory barriers to telehealth expire next year, many Americans could be left without access to healthcare via telehealth services. To ensure that telehealth remains a part of American healthcare moving forward, federal and state policymakers will need to act to bring long-term certainty to the telehealth regulatory framework. In particular, advocacy groups such as the American Telemedicine Association recommend that policymakers focus on key changes such as removing licensing barriers to interstate telehealth care, modernizing reimbursement payment structures to align with value-based payment principles, and permanently adopting pandemic-era telehealth access for Medicare, Federally Qualified Health Centers, and Rural Health Clinics.[8] In addition, another valuable federal regulatory change would be to continue allowing the prescription of controlled substances without an in-person visit. This would entail modifying the Ryan Haight Act, which requires an in-person medical exam before prescribing controlled substances.[9] Like any healthcare reform in the US, cementing these telehealth policy changes as law will be a major uphill battle. Nonetheless, expanding access to telehealth could be a bipartisan policy opportunity for lawmakers, as it would expand access to care and help drive the transition toward value-based care, leading to better health outcomes for patients.

Notes

[1] https://www.healthsystemtracker.org/brief/outpatient-telehealth-use-soared-early-in-the-covid-19-pandemic-but-has-since-receded/

[2] https://www.cms.gov/newsroom/press-releases/new-hhs-study-shows-63-fold-increase-medicare-telehealth-utilization-during-pandemic#:~:text=Taken%20as%20a%20whole%2C%20the,Island%2C%20New%20Hampshire%20and%20Connecticut.

[3] https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency

[4] Id.

[5] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[6] https://thinkbiggerdogood.org/enhancing-the-capacity-of-the-mental-health-and-addiction-workforce-a-framework/?_cldee=anVsaWFkaGFycmlzQGdtYWlsLmNvbQ%3d%3d&recipientid=contact-ddf72678e25aeb11988700155d3b3c69-e949ac3beff94a799393fb4e9bbe3757&utm_source=ClickDimensions&utm_medium=email&utm_campaign=Health%20%7C%20Mental%20Health%20Access%20%7C%2010.19.21&esid=e4588cef-7520-ec11-b6e6-002248246368

[7] https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/11/BPC-Health-Licensure-Brief_WEB.pdf

[8] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[9] https://www.aafp.org/pubs/fpm/issues/2021/0500/p9.html


Perhaps Big Tech Regulation Belongs on Congress’s For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a Congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok’s parent company ByteDance sold its share of the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are allegedly concerned with the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference on the platform and recommending content to U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

In terms of what’s to come for TikTok’s future in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023 with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] In order to cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing the effort, and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it certainly has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the main purpose of the Act is to limit anticompetitive conduct by large technology companies, it includes several provisions on protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services that are gateways for business to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertisement services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data without their explicit consent.[13]

The penalties associated with violations of the Act give it some serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement laid out in specific articles at some point in the eight preceding years.[14] For any company, not limited to gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles in the Act. Finally, in order to compel any company to comply with specific decisions of the Commission and other articles in the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]

If the U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about preventing the spread of misinformation on the platform, and truly believe, as Representative Gus Bilirakis claims to, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” who allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different than other social media giants, and has even sought to put stronger safeguards in place as compared to its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id., Art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses will see enhanced regulation of data privacy. A number of proposed data security laws and regulations came into effect in 2023, increasing the legal requirements on company-held data in order to protect companies’ customers. Two are particularly notable: the FTC Safeguards Rule and the EU’s NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. It requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by this rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires that covered financial institutions “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”

One question that arises is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the rule does not run counter to the FTC’s mission of protecting consumers. However, its economic costs and effects are debatable. One concern is that the rule may impose substantial costs, especially on small businesses, whose more limited capital may make the new obligations unbearable. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the requirements under the rule, or have sophisticated programs that are easily adaptable to new obligations, the FTC still underestimated the burdens the Safeguards Rule imposes.[4] Specifically, labor shortages have hampered efforts by financial institutions to implement information security systems, and supply chain issues have caused delays in obtaining equipment for updating information systems. Importantly, according to Commissioner Wilson, most of these factors are outside the control of the financial institutions. Implementing a heightened standard would thus cause unfairness, especially to small financial institutions that have even more trouble obtaining the necessary equipment during times of supply chain and labor shortages.

Recognizing such difficulties, the FTC did offer a certain extent of leniency for implementation of the rule. Specifically, the FTC extended the deadline by six months, primarily due to supply chain issues that may result in delays and shortage of qualified personnel to implement information security programs. This extension is beneficial to the Rule because it offers the covered financial institutions time for adjustment and compliance.

Another concern is that the rule’s mandates will not result in a significant reduction in data security risks for customers. The answer to this question remains uncertain, as the FTC Safeguards Rule just came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rulemaking process the FTC sought comments on the proposed Safeguards Rule and extended the deadline for the public to submit comments by 60 days.[5] This suggests that the FTC took careful consideration of how to most effectively reduce data security risks by giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires member states to be appropriately equipped with response and information systems, sets up a Cooperation Group to facilitate the exchange of information among member states, and aims to ensure a culture of security across critical infrastructures, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers to comply with. The NIS2 Directive echoes the FTC Safeguards Rule to a large extent regarding the elevated standard of cybersecurity measures.

However, the NIS2 Directive takes a different approach by imposing duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive designates that ENISA assist Member States and the Cooperation Group set up under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Ordering the agency itself to facilitate the Directive’s implementation may add to its likelihood of success. Although the outcome is uncertain, primarily because of the Directive’s broad language, burdens on financial institutions will at least be lessened to a certain extent. What distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This offers more flexibility than the extension of the FTC Safeguards Rule: as the Directive passes through national legislative processes, financial institutions will have more time to prepare and respond to the proposed changes.

In summary, data privacy laws are tightening globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate a similar industry. That being said, regardless of what happens in the EU, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the outcome of the Rule is uncertain, the six-month extension will at least offer a certain degree of flexibility.

Notes

[1]https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.

 


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to prompts queried by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. This GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or attempts at the same prompt multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
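The “autoregressive” idea described above can be illustrated with a deliberately tiny sketch. The toy model below is not OpenAI’s technology; it is a hypothetical bigram counter that shows the core mechanic of predicting the most likely next word from the words seen so far:

```python
from collections import Counter, defaultdict

# Toy illustration (not OpenAI's actual model): an autoregressive
# predictor picks the most likely next word given preceding text.
corpus = "the model predicts the next word given the next prompt".split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "next" follows "the" most often in this toy corpus
```

Real large language models replace these raw counts with billions of learned parameters, but the generation loop is the same in spirit: predict a next token, append it, repeat.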

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick claims in the Harvard Business Review that this difference in quality makes ChatGPT a tipping point for AI: it can be used to write weight-loss plans and children’s books, and even to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the word minimum of 1,000 words that I had set for it. What is evident through these cases, however, is that its output sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, generating computer code for a startup prototype from code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. While ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, it would still, notably, earn a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and serving as this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used in tasks where some failure is acceptable. In these tasks, AI like ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was set to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities for productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, these filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald these improvements in AI as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed impossible sooner rather than later. In this light, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach an AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps more unsettling is the notion that this AI is indiscriminately sucking in user data like a black hole.

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on users’ GPS data. But the way our data is being used by ChatGPT is in a league of its own. User data is being iterated upon and, most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Meanwhile, the general public may not be fully aware of what kind of privacy protections—or lack thereof—are in place in the United States. In brief, American law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that have not kept up with technological advancement. Most of the ECPA addresses matters like the interception of communications through wiretapping and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of the personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise because the use and collection of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Collecting and using EU residents’ data without a lawful basis violates the GDPR, but how “use” applies to ChatGPT is not clear; the use of data in ChatGPT’s fine-tuning process could arguably be a violation.

A particularly troubling example, raised in a recent Forbes article, is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and its disclosure could violate ABA confidentiality rules. As ChatGPT generates even more public fervor, professionals are likely to try the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company behind ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and users’ interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with affiliates and subsidiaries of the company, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it comes at all. If users of ChatGPT remain cognizant of what they input into the tool and stay informed about OpenAI’s obligations to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta’s “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the actions tracked include purchases, registrations, cart additions, searches, and more. Website owners can then use this information to better understand user behavior, and can spend advertising budgets more efficiently by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between these other tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship platform, Facebook. Whether intentionally or accidentally, Pixel has been found to grab personal information beyond the simple user web actions it was supposed to be limited to and to connect that information to Facebook profiles.[6]
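The mechanics behind this kind of tracking can be sketched in simplified form. The snippet below is a hypothetical illustration, not Meta’s actual code or endpoint: a tracking pixel works by having the visitor’s browser request a tiny resource whose URL encodes details of the visit, such as the event type, the page being viewed, and an identifier that can later be matched to a profile.

```python
from urllib.parse import urlencode

# Hypothetical sketch of a tracking beacon (tracker.example.com and
# the parameter names are invented for illustration): the browser
# fetches a URL whose query string carries the visit details.
def build_beacon_url(event, page, user_id):
    params = {
        "ev": event,    # the action taken, e.g. a page view or search
        "dl": page,     # the URL of the page where the action occurred
        "uid": user_id, # an identifier that can be matched to a profile
    }
    return "https://tracker.example.com/tr?" + urlencode(params)

url = build_beacon_url("PageView", "https://hospital.example.org/portal", "fb-user-123")
print(url)
```

The privacy concern described above follows directly from this design: if the page URL or form fields contain patient or viewing information, that information rides along in the request to the analytics provider.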

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] However, that number may decrease after Meta’s recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel improperly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, matching the information Pixel collected from hospital websites and patient portals to users’ Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and added Pixel code to its website to evaluate the campaign’s effectiveness; Pixel proceeded to send private, identifiable user information to Meta.[9] In another lawsuit, Meta’s co-defendants the University of California San Francisco and Dignity Health (“UCSF”) were accused of illegally gathering patient information via Pixel code on their patient portal. Private medical information was then distributed to Meta, and pharmaceutical companies allegedly gained access to this information and sent out targeted ads based on it.[10] That is just one example; all in all, more than 1 million patients have been affected by this Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions against Meta for violations of the Video Privacy Protection Act (the “VPPA”). The VPPA was enacted in 1988 to cover videotape and audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions accuse Meta of using the Pixel tool to take video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that the VPPA was violated when video-viewing activity was collected in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than anonymizing it) so that Pixel users could target their ads at the viewers. Under the VPPA, the defendants in these lawsuits are not Meta itself but the companies that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy. Because of that, the number of data protection civil suits can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act, was enacted in 1996 to protect patient information from disclosure without patient consent. In the patient portal cases, HIPAA actions would have to be initiated by the U.S. government. Claimants are therefore suing Meta instead under consumer protection and other privacy laws like the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act.[14] These laws allow individuals to sue, whereas under federal statutes like HIPAA the government may move slowly, or not at all. And in the video tracking cases, litigants may sue only the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, its involvement in more privacy disputes is not ideal for the tech giant, as it may hurt the public’s trust in Meta Platforms.

Possible Outcomes

If found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run into the millions as well, and would vary state to state due to the variety of laws under which the claims are brought.[17] In the UCSF case specifically, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, these terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent must come from forms separate from online user agreements.[19] If more lawsuits emerge from Pixel’s actions, companies may well move away from such web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel stops being worth the risk such tools present to websites.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the latest hit with a class action over Meta’s alleged patient data mining, Fierce Healthcare (Aug. 12, 2022 10:30AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas legislatures passed substantively similar laws that restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to explain why they chose to censor certain content. These laws seemingly stem from the perception of conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional for violating social media platforms’ First Amendment rights in May, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the previous injunction made by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies that ban certain content or at least require a sensitivity warning before certain content is viewed. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed by the controversial 2010 Citizens United decision establishing corporations’ right to make independent political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through the establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected arguments challenging social media platforms’ ability to set and uphold their own content guidelines, upholding social media platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked the common carrier doctrine—which empowers states to enforce nondiscriminatory practices for services that the public uses en masse, and which the 11th Circuit explicitly rejected—embracing it in the context of social media platforms.[5] The court therefore held with “no doubts” that section 7 of the Texas law, which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech provoking violence, etc.), was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without being unduly burdensome to the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

On September 21st, 2022, Florida petitioned for a writ of certiorari asking the Supreme Court to review the 11th Circuit’s May 2022 decision. The petition referenced the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these Circuit Court decisions were Republican proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] Netchoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at 59.

[6] Id. at 52.

[7]  Id. at 102.


Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term dates to 1997, when Paul Gilster coined “digital literacy” as “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks presented by Covid-19, and digitally illiterate parents were unable to help their children with online classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly, but it highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were quickly separated from their families. Teaching family members how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or build other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved senior citizen technology grants. Pennsylvania’s Department of Aging has granted funds to adult education centers to provide technology training for senior citizens, and similar programs have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell whether these programs are working. States like Pennsylvania and Maryland had programs before the pandemic, yet those programs alone did not close the distance between America’s aging population and the rest of the nation during the pandemic. Given the scale of the program in Prince George’s County, however, that likely was not the goal. Beyond that, there is a larger question: is the purpose of digital literacy for the elderly to ensure that they can connect with the world during a pandemic, or simply to ensure that they have the skills to communicate with the world at all? With this in mind, programs that predate the pandemic, such as those in Pennsylvania and Maryland, likely had the right approach, even if they were not of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and internet access, many families still struggled to get their children into online classes, an issue tied to what is known as “last mile infrastructure.” During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it, leaving families feeling ill-prepared to support their children’s educational growth from home. Providing families with broadband access without digital literacy training disproportionately impacted families of color by limiting their children’s capacity for growth online compared to their peers. While this was not an intended result, it is a consequence of hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation and Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from the issues created by Covid-19 is cryptocurrency. One of cryptocurrency’s largest selling points is that it is largely unregulated; users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks of cryptocurrency. Last year, the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year, the Department of the Treasury announced that it would release instructional materials explaining how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Digital literacy was a problem for years before the Covid-19 pandemic, and whenever new technologies become popular, there are new lessons for all age groups to learn. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we will find that there are disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


What Does the SolarWinds Hack Mean for the Future of Law Firm Cybersecurity?

Sam Sylvan, MJLST Staffer

Last December, the massive software company SolarWinds acknowledged that its popular IT-monitoring software, Orion, had been hacked earlier in the year. The software was sold to thousands of SolarWinds’ clients, including government agencies and Fortune 500 companies. A software update of Orion provided Russian-backed hackers with a backdoor into the internal systems of approximately 18,000 SolarWinds customers, a number likely to increase over time as more organizations discover that they too are victims of the hack. Even FireEye, the cybersecurity company that first identified the hack, learned that its own systems had been compromised.

The hack has widespread implications for the future of cybersecurity in the legal field. Courts and government attorneys were not able to avoid the Orion hack. The cybercriminals were able to break into the DOJ’s internal systems, leading the agency to report that the hackers might have breached 3,450 DOJ email inboxes, and the Administrative Office of the U.S. Courts is working with DHS to audit vulnerabilities in the CM/ECF system, where highly sensitive non-public documents are filed under seal. As of late February, no law firm had announced that it too was a victim of the hack, likely because law firms do not typically use Orion software for their IT management. Even so, the Orion hack is a wake-up call to law firms across the country regarding their cybersecurity. There have been hacks before, including hacks of law firms, but nothing of this magnitude or potential level of sabotage. Now more than ever, law firms must contemplate and implement preventative measures and response plans.

Law firms of all sizes handle confidential and highly sensitive client documents and data. Oftentimes, firms have IT specialists on the payroll but lack in-house cybersecurity experts who can continue to develop the firm’s cybersecurity defenses. The SolarWinds hack shows why this needs to change, particularly for law firms that handle an exorbitant amount of highly confidential and sensitive client documents and can afford to add these experts to their ranks. Relying exclusively on consultants or other third parties for cybersecurity only further jeopardizes the security of law firms’ document management systems and caches of electronically stored client documents. Indeed, it was reliance on a third-party vendor that enabled the SolarWinds hack in the first place.

In addition to adding a specialist to the payroll, there are a number of other specific measures law firms can take to address and bolster their cybersecurity defenses. For those who think it is not a matter of “if” but rather “when,” law firms should have an incident response plan ready to go. According to Jim Turner, chief operating officer of Hilltop Consultants, many law firms do not even have an incident response plan in place.

Further, because complacency and outdated IT software are of particular concern for law firms, “vendor vulnerability assessments” should become commonplace across the industry. False senses of protection need to be discarded, and constant reassessment should become the norm. Firms should also upgrade their software protection to include endpoint detection and response (EDR), which uses AI to detect hacking activity on firm systems. Finally, purchasing cyber insurance is a strong safety measure in the event a firm must respond to a breach, as it provides the additional resources needed to respond to hacks effectively.