
Enriching and Undermining Justice: The Risks of Zoom Court

Matthew Prager, MJLST Staffer

In the spring of 2020, the United States shut down public spaces in response to the COVID-19 pandemic. The court system did not escape this process, with jury trials paused nationwide in March 2020.[1] In this rapidly changing environment, courts scrambled to adjust, using a slew of modern telecommunication and video conferencing systems to resume the various aspects of the courtroom system in the virtual world. Despite this radical upheaval to traditional courtroom structure, this new form of court appears here to stay.[2]

Much has been written about the benefits of telecommunication services like Zoom and similar software to the courtroom system.[3] However, while Zoom court has been a boon to many, Zoom-style virtual court appearances also present legal challenges.[4] Some of these problems affect all courtroom participants, while others disproportionately affect highly vulnerable individuals’ ability to participate in the legal system.

Telecommunications technology, like all technology, is vulnerable to malfunctions and ‘glitches’, and these glitches can significantly disadvantage a party’s engagement with the legal system. In the most direct sense, glitches–be they video malfunctions, audio or microphone failures, or unstable internet connections–can limit a party’s ability to hear and be heard by their attorneys, opposing parties, or the judge, ultimately compromising their legitimate participation in the legal process.[5]

But these glitches can have effects beyond disrupting direct communication. One study found that participants evaluated individuals suffering from connection issues as less likable.[6] Another study found that mock jurors shown a video on a broken VCR recommended higher prison terms than mock jurors shown the same video on a functional VCR.[7] In effect, technology can act as a third party in a courtroom, and when that third party misbehaves, frustrations can unjustly prejudice a party with deleterious consequences.

Even absent glitches, observing a person through a screen can have a negative impact on how that person is perceived.[8] Researchers noted this issue even before the pandemic: online bail hearings conducted by closed-circuit camera led to significantly higher bond amounts than those conducted in person.[9] Simply adjusting the camera angle can alter the perception of a witness in the eyes of the observer.[10]

These issues represent a universal problem for any party in the legal system, but they are especially impactful on the elderly population.[11] Senior citizens often lack digital literacy with modern and emerging technologies, and may even find that their first experience with these telecommunication systems is a courtroom hearing–that is, if they even have access to the necessary technology.[12] These issues can have extreme consequences: in one case, an elderly defendant violated their probation because they failed to navigate a faulty Zoom link.[13] The elderly are especially vulnerable because issues with technical literacy can be compounded by sensory difficulties. One party with bad eyesight found that being required to communicate through a screen functionally deprived him of any communication at all.[14]

While there has been some effort to return to the in-person court experience, the benefits of virtual trials are too significant to ignore.[15] Virtual court minimizes transportation costs, allows vulnerable parties to engage in the legal system from the safety and familiarity of their own homes, and simplifies the logistics of the courtroom process. These benefits are indisputable for many participants in the legal system. But they are accompanied by drawbacks, and practicalities aside, the adverse and disproportionate impact of virtual courtrooms on senior citizens should be seen as a problem to solve and not simply endure.

Notes

[1] Debra Cassens Weiss, A slew of federal and state courts suspend trials or close for coronavirus threat, ABA JOURNAL (March 18, 2020) (https://www.abajournal.com/news/article/a-slew-of-federal-and-state-courts-jump-on-the-bandwagon-suspending-trials-for-coronavirus-threat)

[2] How Courts Embraced Technology, Met the Pandemic Challenge, and Revolutionized Their Operations, PEW, December 1, 2021 (https://www.pewtrusts.org/en/research-and-analysis/reports/2021/12/how-courts-embraced-technology-met-the-pandemic-challenge-and-revolutionized-their-operations).

[3] See Amy Petkovsek, A Virtual Path to Justice: Paving Smoother Roads to Courtroom Access, ABA (June 3, 2024) (https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/technology-and-the-law/a-virtual-path-to-justice) (finding that Zoom court: minimizes transportation costs for low-income, disabled or remote parties; allows parties to participate in court from a safe or trusted environment; minimizes disruptions for children who would otherwise miss entire days of school; protects undocumented individuals from the risk of deportation; diminishes courtroom reschedulings from parties lacking access to childcare or transportation and allows immune-compromised and other high health-risk parties to engage in the legal process without exposure to transmittable illnesses).

[4] Daniel Gielchinsky, Returning to Court in a Post-COVID Era: The Pros and Cons of a Virtual Court System, LAW.com (https://www.law.com/dailybusinessreview/2024/03/15/returning-to-court-in-a-post-covid-era-the-pros-and-cons-of-a-virtual-court-system/)

[5] Benefits & Disadvantages of Zoom Court Hearings, APPEL & MORSE, (https://www.appelmorse.com/blog/2020/july/benefits-disadvantages-of-zoom-court-hearings/) (last visited Oct. 7, 2024).

[6] Angela Chang, Zoom Trials as the New Normal: A Cautionary Tale, U. CHI. L. REV. (https://lawreview.uchicago.edu/online-archive/zoom-trials-new-normal-cautionary-tale) (“Participants in that study perceived their conversation partners as less friendly, less active and less cheerful when there were transmission delays. . . .compared to conversations without delays.”).

[7] Id.

[8] Id. (explaining that “screen” interactions are remembered less vividly and obscure important nonverbal social cues).

[9] Id.

[10] Shannon Havener, Effects of Videoconferencing on Perception in the Courtroom (2014) (Ph.D. dissertation, Arizona State University).

[11] Virtual Justice? A National Study Analyzing the Transition to Remote Criminal Court, STANFORD CRIMINAL JUSTICE CENTER, Aug. 2021, at 78.

[12] Id. at 79 (describing how some parties lack access to phones, Wi-Fi or any methods of electronic communication).

[13] Ivan Villegas, Elderly Accused Violates Probation, VANGUARD NEWS GROUP (October 21, 2022) (https://davisvanguard.org/2022/10/elderly-accused-violates-probation-zoom-problems-defense-claims/)

[14] John Seasly, Challenges arise as the courtroom goes virtual, Injustice Watch (April 22, 2020) (https://www.injusticewatch.org/judges/court-administration/2020/challenges-arise-as-the-courtroom-goes-virtual/)

[15] Kara Berg, Leading Michigan judges call for return to in-person court proceedings (Oct. 2, 2024, 9:36:00 PM), (https://detroitnews.com/story/news/local/michigan/2024/10/02/leading-michigan-judges-call-for-return-to-in-person-court-proceedings/75484358007/#:~:text=Courts%20began%20heavily%20using%20Zoom,is%20determined%20by%20individual%20judges).


The New Reefer Madness? New Laws Look to Regulate Hemp Products

Violet Butler, MJLST Staffer

In 2018, the federal government took a major step in shifting its policy on marijuana criminalization. Included in the 2018 Farm Bill was a provision that legalized some hemp-derived products, in particular CBD products with low levels of THC.[1] While the industry and activists touted this as a major step forward, the move to increase regulations on these hemp products has recently gained steam.

But what exactly did the federal government legalize? The 2018 Farm Bill legalized hemp and hemp-derived products (including CBD) that contain no more than 0.3% THC.[2] It should be noted that most cannabis products are consumed for some form of intoxication,[3] and, suffice it to say, intoxication does not arise from 0.3% THC. The 2018 Farm Bill thus legalized a very small subset of cannabis products serving a limited range of uses. Under the law, if a product contains more than 0.3% THC, it is legally classified as marijuana and remains illegal under the Controlled Substances Act. So, if these new products cannot be used as intoxicants, why is there a push for more regulation?

One reason the push for further regulation is gaining traction is concern over synthetically produced cannabinoids. The National Academies of Sciences, Engineering, and Medicine recently published a report urging the federal government to redefine what “hemp” means, in an effort to ban semi-synthetic cannabinoids derived from legal hemp products, as these cannabinoids can mirror the intoxicating effects of marijuana.[4] Clamping down on these semi-synthetic products would help maintain the legal line between hemp, CBD, and marijuana.

Different states are taking different approaches to regulating hemp products. One camp of lawmakers wants to go back to the old regime, where any minuscule trace of THC was illegal. This “total ban” approach is presently seen in new legislation passed in Arkansas. Arkansas’ Act 629 bans the “production and sale of products containing Delta 8, Delta 9 and Delta 10 and other THC isomers inside the state of Arkansas” in any capacity.[5] Currently on appeal in the Eighth Circuit, the act has been subject to a lawsuit from hemp companies claiming the state law is preempted by the 2018 Farm Bill.[6] Arkansas is not the only state to take a total ban approach. Missouri Governor Mike Parson recently signed an executive order banning all consumables containing “psychoactive cannabis products”—hemp products containing even trace amounts of THC—outside of the state’s already regulated cannabis market.[7] While not as broad in scope as Arkansas’ law, this wide-reaching order restricts the sale of most non-marijuana cannabis products in the state.

However, some states have taken a different approach to regulating hemp products, focusing on how they are distributed. New Jersey recently banned the sale of products containing any amount of THC to anyone under the age of 21.[8] California Governor Gavin Newsom took a similar approach, signing an emergency ban on all hemp products containing THC and restricting the sale of all other hemp products to the 21+ market.[9] Even the federal government might be looking to increase the regulation of hemp products. Senator Ron Wyden recently introduced a bill that would raise the age at which someone could buy hemp products to 21 and set more federal safety standards for the industry.[10]

So, why is there a push to change hemp laws now? It could come down to perceived health risks and a rise in hospitalizations. A study from Nationwide Children’s Hospital found over 3,000 calls to poison control centers related to THC, including the THC found in small doses in legal hemp products.[11] Although only about 16% of these calls resulted in hospitalizations, roughly half of admissions were for children under six years old.[12] California Governor Newsom directly cited hospitalizations as one of the principal reasons he signed his emergency order.[13]

People seem to be worried about the hemp products currently on the market, including CBD, but should they be? The jury is still out on the health effects of CBD. A 2018 report from the World Health Organization said that CBD had a “good safety profile” and reported no evidence of detrimental effects from recreational consumption of pure CBD.[14] However, the Association of American Medical Colleges (AAMC) notes that CBD is understudied and that there could be adverse interactions if CBD is taken with other medications.[15]

Legislators and policymakers need to be able to ensure the safety and well-being of their citizens without creating unnecessary barriers for a new and growing industry. One of the barriers states face is—maybe ironically—the 2018 Farm Bill. The bill opened the door for hemp products that meet the THC standard, and the new state laws are running into friction with the federal law. While states are allowed (and expected) to regulate the hemp industry under the 2018 Farm Bill, the move by many states to put heavier restrictions on the amount of THC allowed in hemp products appears to conflict with federal law. The lawsuits from hemp producers so far have all revolved around the idea that these state regulations, which are more restrictive than the 2018 Farm Bill, are preempted by the federal legislation.[16] Under Article VI of the Constitution, federal laws are the “supreme law of the land,” so the Farm Bill must preempt state law in some way, but exactly how it does so is unclear.[17] There are two competing ideas about how the Farm Bill preempts state law. The first is that the hemp regulations laid out in the federal law are a ceiling: states cannot regulate hemp more stringently than the Farm Bill does. This is the interpretation that hemp producers prefer and the theory under which they are suing. The second, preferred by states looking to increase regulation, is that the Farm Bill sets only a baseline: states are free to impose stricter regulations on the industry, and the federal law simply fills the gap where states have not adopted their own rules.

Court rulings on this issue may settle the debate, but there is always a risk of a circuit split forming as different Courts of Appeals hear and decide different lawsuits. To clear up the confusion once and for all, the federal government could clarify the scope of state regulatory power with new legislation, or the Supreme Court could decide the issue in its upcoming term. But, until then, the legal challenges are likely to keep mounting and leave the nascent hemp industry in limbo.

Notes

[1] Dennis Romero, Hemp Industry Expected to Blossom Under New Farm Bill, NBC News (Dec. 17, 2018, 4:02 PM), https://www.nbcnews.com/news/us-news/hemp-industry-expected-blossom-under-new-farm-bill-n947791. For clarification, CBD stands for cannabidiol, a product derived from hemp, often sold in gummy or oil form. THC stands for tetrahydrocannabinol, the psychoactive part of the marijuana plant that can get you high. THC often refers to what is known as delta-9 THC, a type of THC found in the marijuana plant.

[2] John Hudak, The Farm Bill, Hemp Legalization and the Status of CBD: An Explainer, Brookings Institution (Dec. 14, 2018), https://www.brookings.edu/articles/the-farm-bill-hemp-and-cbd-explainer/

[3] As the Brookings Institution points out, the extremely low levels of THC in now-legal hemp products means that these products cannot be used to get high.

[4] Sam Reisman, New Report Urges Feds to Take Larger Role in Pot Policy, Law360 (Sept. 26, 2024, 8:53 PM), https://plus.lexis.com/newsstand/law360/article/1883058/?crid=c6fd0d9a-971e-489f-a5c6-8c1725ffee87

[5] Dale Ellis, Federal Judge Blocks State’s New Law Banning Delta-8 THC Products, Arkansas Democrat Gazette (Sept. 7, 2023, 6:00 PM), https://www.arkansasonline.com/news/2023/sep/07/federal-judge-blocks-states-new-law-banning-delta-8-thc-products/

[6] Sam Reisman, Court Defers Ruling On Challenge To Arkansas Hemp Law, Law360 (Sept. 25, 2024, 6:50 PM), https://plus.lexis.com/newsstand/law360/article/1882683/?crid=48cd5145-0817-47a7-bf22-1fb3bf01cb5f

[7] Jonathan Capriel, Missouri Ban on Some Psychoactive Foods to Hit Sept. 1 (August 30, 2024, 8:47 PM), https://plus.lexis.com/newsstand/law360/article/1882683/?crid=48cd5145-0817-47a7-bf22-1fb3bf01cb5f;  Rebecca Rivas, Missouri Hemp Leaders File Suit to Halt Governor’s Ban on Hemp THC Products, Missouri Independent (August 30, 2024 5:55 AM), https://missouriindependent.com/2024/08/30/missouri-hemp-leaders-set-to-file-suit-to-halt-governors-ban-on-hemp-thc-products/

[8] Sophie Nieto-Munoz, Gov. Murphy Signs Controversial Bill Restricting Sales of Hemp Products, New Jersey Monitor (Sept. 13, 2024, 7:11 AM), https://newjerseymonitor.com/2024/09/13/gov-murphy-signs-controversial-bill-restricting-sales-of-hemp-products/

[9] Rae Ann Varona, Calif. Gov.’s Emergency Hemp Intoxicant Ban Wins Approval, Law360 (Sept. 24, 2024, 9:49 PM),  https://plus.lexis.com/newsstand/law360/article/1882121/?crid=642ddd2e-a29d-46d6-8ff4-b7f209fd6c7f&cbc=0,0

[10] Sam Reisman, Wyden Pitches New Bill To Regulate Intoxicating Hemp, Law360 (Sept. 25, 2024, 7:06 PM), https://plus.lexis.com/newsstand/law360/article/1882226/?crid=ed53b57f-dd97-4a6a-8a89-f6028f95e523

[11] Nationwide Children’s, New Study Finds Increase in Exposures to Synthetic Tetrahydrocannabinols Among Young Children, Teens, and Adults, Nationwide Children’s Hospital (May 7, 2024), https://www.nationwidechildrens.org/newsroom/news-releases/2024/05/deltathc_clinicaltoxicology

[12] Id.

[13] Varona, supra note 9.

[14] World Health Organization, Cannabidiol (CBD) Critical Review Report 5 (2018).

[15] Stacy Weiner, CBD: Does It Work? Is It Safe? Is It Legal?, AAMC News (July 20, 2023), https://www.aamc.org/news/cbd-does-it-work-it-safe-it-legal

[16] Reisman, supra note 6; Varona, supra note 9.

[17] U.S. Const. art. VI, cl. 2


NEPA and Climate Change: Are Environmental Protections Hindering Renewable Energy Development?

Samuel Taylor, MJLST Staffer

The National Environmental Policy Act, or “NEPA”, has been essential in protecting America’s air and water, managing health hazards, and preserving environmental integrity. For decades, environmental activist groups, the government, and ordinary citizens have relied on and benefitted from enforcing NEPA against those looking to pollute, poison, or endanger Americans and their environment. NEPA, however, is proving less suitable for addressing the country’s most imminent environmental challenge: climate change. As proponents of green energy scramble to ditch fossil fuels, NEPA and its procedural requirements are accused of delaying or halting renewable energy projects. Environmental protection laws remain essential to preventing the harms they were enacted to address, and many new green energy projects do pose additional risks to the environment, but we also need to transition away from fossil fuels as fast as possible to avoid the worst consequences of climate change. The conflict between the need to address climate change and the need to maintain environmental protections has created a regulatory challenge that may not have a perfect solution.

Enacted in 1970, NEPA was the first major environmental protection measure taken in the United States.[i] The “magna carta” of environmental laws applies to all “major federal actions significantly affecting the environment.”[ii] Major federal actions can include everything from infrastructure projects like proposed dams, bridges, highways, and pipelines, to housing developments, research projects, and wildlife management plans.[iii] Before a federal agency can act, it must follow a series of procedures that force it to consider the environmental impacts of the potential action. These procedures involve community outreach, consideration of the effects of past and future actions in the region, and a detailed public explanation of the agency’s findings, and they often take years to complete.[iv] By requiring the government to follow these procedures “to the fullest extent possible,” NEPA aims to ensure that environmental concerns are given sufficient consideration before any harmful actions are taken.[v] Notably, NEPA is not a results-oriented statute but a process-oriented one.[vi] No agency decision can be made until its procedures are followed, but once they are, NEPA does not mandate a particular decision.[vii] NEPA does not even require that environmental concerns be given more weight than any other factors.[viii] Nevertheless, if an agency fails to properly follow NEPA procedures, all resulting decisions can be invalidated if challenged in the courts.[ix]

Though passing NEPA was the first step Congress took towards addressing environmental concerns, and decades of NEPA success stories have followed, there is growing concern about its ability to adapt to the pressing challenges presented by global climate change.[x] NEPA, critics say, drastically slows the government’s ability to invest in green energy because each step of the procedure can be challenged in court.[xi] Corporate competitors in the renewable energy sector, environmental interest groups, concerned citizen groups, and Native American tribes have all challenged various projects’ compliance with NEPA requirements.[xii] Many of these groups have legitimate concerns about the projects, and NEPA allows them to stall or halt development while the government is forced to further consider potential environmental impacts. This creates a direct conflict between these valid concerns and efforts to end the country’s reliance on fossil fuels.[xiii] Collectively, the long procedures and potential legal challenges that accompany NEPA’s requirements present serious hurdles to the production of green energy.

Legal experts disagree, perhaps not surprisingly, over the extent to which NEPA hinders the production of green energy. Some groups believe the rhetoric surrounding NEPA’s deficiencies is exaggerated, citing data showing that only a very small percentage of green energy projects actually require the production of environmental impact statements (EISs).[xiv] Others present NEPA and other environmental protection laws as serious hurdles preventing the production of renewable energy at the pace we need to avoid the worst effects of climate change.[xv] They argue that this data is not properly representative of all clean energy projects, ignores the delays caused by litigation, and does not properly account for the likelihood that delays will worsen in the future.[xvi] Because there is little consensus regarding the extent of the problem, there is likewise almost no agreement on a potential solution.

Lawmakers and legal scholars have proposed a range of approaches to the NEPA problem. Most drastically, a bill introduced to the U.S. House Committee on Natural Resources by Representative Bruce Westerman would eliminate most NEPA provisions by limiting consideration of new scientific evidence, exempting some projects from NEPA’s requirements, and drastically limiting community-initiated judicial review.[xvii] Other proposals are more modest, including permitting reform to favor green energy projects, placing some limits on judicial review, and collecting more comprehensive data on NEPA issues.[xviii] Still others are staunchly against most reforms, arguing that weakening any NEPA provisions would open the door to greater environmental abuses.[xix] The differing opinions on the scope of the problem and the wide range of proposed solutions amount to a problem that will not be easy to solve.

The legal community is divided on the efficacy of existing NEPA regulations that have, for decades, promoted environmental protection. In the face of climate change and the accompanying need for renewable energy, it must be determined whether NEPA is truly hindering the switch to green energy. The United States must build more renewable energy infrastructure if we are to avoid the worst consequences of global climate change, but with concern growing that our own environmental protection laws are hindering progress, it will be challenging to move forward in a manner that balances the need for green energy production against the necessity of strong environmental protection laws.

Notes

[i] Sam Kalen, NEPA’s Trajectory: Our Waning Environmental Charter From Nixon to Trump, 50 Environmental Law Reporter 10398, 10398 (2020).

[ii] Id.; Mark A. Chertok, Overview of the National Environmental Policy Act: Environmental Impact Assessments and Alternatives (2021); 42 U.S.C. §§ 4321–70.

[iii] Elly Pepper, Never Eliminate Public Advice: NEPA Success Stories, Natural Resources Defense Council (Feb. 1, 2015), https://www.nrdc.org/resources/never-eliminate-public-advice-nepa-success-stories#:~:text=The%20NEPA%20process%20has%20saved,participated%20in%20important%20federal%20decisions.

[iv] Chertok, supra note ii; 42 U.S.C. §§ 4321–70.

[v] Chertok, supra note ii; Catron County v. U.S.F.W.S., 75 F.3d 1429, 1437 (10th Cir. 1996).

[vi] Chertok, supra note ii; Catron County at 1434.

[vii] Chertok, supra note ii.

[viii] Balt. Gas & Elec. Co. v. Nat. Res. Def. Council, Inc., 462 U.S. 87, 97 (1983).

[ix] Chertok, supra note ii (citing Lands Council v. Powell, 395 F.3d 1019, 1027 (9th Cir. 2005)).

[x] Pepper, supra note iii; Aidan Mackenzie & Santi Ruiz, No, NEPA Really Is a Problem for Clean Energy, Institute For Progress (Aug. 17, 2023), https://ifp.org/no-nepa-really-is-a-problem-for-clean-energy/#nepa-will-harm-clean-energy-projects-even-more-in-the-future; Darian Woods & Adrian Ma, Environmental Laws Can Be an Obstacle in Building Green Energy Infrastructure, NPR (Apr. 13, 2022), https://www.npr.org/2022/04/13/1092686675/environmental-laws-can-be-an-obstacle-in-building-green-energy-infrastructure.

[xi] Mackenzie & Ruiz, supra note x; See, e.g. Ocean Advocates v. U.S. Army Corps of Engineers, 402 F.3d 846 (9th Cir. 2005) (where the agency finding of no significant impact was challenged by an environmental protection group); Sierra Club v. Bosworth, 510 F.3d 1016 (9th Cir. 2007) (where the agency’s EIS analysis was challenged by the Sierra Club).

[xii] Niina H. Farah, Tribes Sue Over NEPA Review for Oregon Offshore Wind Auction, Politico (Sep. 18, 2024), https://www.eenews.net/articles/tribes-sue-over-nepa-review-for-oregon-offshore-wind-auction/; Christine Billy, Update: Congestion Pricing: A Case Study on Interstate Air Pollution Disputes, New York State Bar Association (Sep. 23, 2024), https://nysba.org/update-congestion-pricing-a-case-study-on-interstate-air-pollution-disputes/; Jonathan D. Brightbill & Madalyn Brown Feiger, Environmental Challenges Seek to Block Renewable Projects, Winston & Strawn LLP (Sep. 1, 2021), https://www.winston.com/en/blogs-and-podcasts/winston-and-the-legal-environment/environmental-challenges-seek-to-block-renewable-projects.

[xiii] Farah, supra note xii; Brightbill & Feiger, supra note xii.

[xiv] Ann Alexander, Renewable Energy and Environmental Protection Is Not an Either/Or, Natural Resources Defense Council (Jan. 18, 2024), https://www.nrdc.org/bio/ann-alexander/renewable-energy-and-environmental-protection-not-eitheror.

[xv] Mackenzie & Ruiz, supra note x.

[xvi] Alexander, supra note xiv; Mackenzie & Ruiz, supra note x.

[xvii] Defenders of Wildlife, Defenders Slams Bill Aiming to Rollback NEPA and Gut Environmental Protections, (Sep. 10, 2024), https://defenders.org/newsroom/defenders-slams-bill-aiming-rollback-nepa-and-gut-environmental-protections.

[xviii] Brian Potter, Arnab Datta & Alec Stapp, How to Stop Environmental Review from harming the Environment, Institute For Progress (Sep. 13, 2022), https://ifp.org/environmental-review/.

[xix] Alexander, supra note xiv; Sierra Club


Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, protects online service providers from civil liability for content published on their servers by third parties. Essentially, it means that if a Google search for one’s name produces a link to a blog post containing false and libelous content about that person, the falsely accused searcher could pursue a claim of defamation against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning the libelous results on the search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” for purposes of civil penalties.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and writing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In the first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]

Besides going viral online, the silly results were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation that was more difficult to identify as false. One such result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews and now rarely produces blatantly false results — but is “rarely” enough when 8.5 billion searches are run on Google per day?[vii]

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff would first have to show the court that § 230 of the Communications Decency Act is not a statutory bar to claims against generative AI. A recent consensus of legal scholars anticipates that courts will likely find the CDA does not bar claims against a company producing libelous content through generative AI, because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to defamation claims over traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the language she publishes is her own creation, and she can be held liable for it despite sourcing some pieces from other speakers. Traditional search engines, on the other hand, have historically posted the sourced material directly to the reader, so they are not the “speaker” and therefore are insulated from defamation claims. Enter generative AI, the output of which is likely to be considered original work by courts, and that insulation may erode.[ix] Effectively, introducing an AI Overview feature removes the statutory bar to claims under § 230 of the CDA that search engines have relied on to avoid liability for defamation.

But even without an outright statutory bar to defamation claims against a search engine’s libelous AI output, there is disagreement over whether humans rely on generative AI output seriously enough for it to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, a result displayed at the top of a search engine’s results page is given more weight and authority, even though users might otherwise be wary of AI outputs.[xi]

While no landmark case law on the liability of an AI system for libelous output has developed to date, several lawsuits have already been filed over how liability should be assigned for libelous content produced by generative AI, including at least one case against a search engine for AI-generated output displayed on a search engine results page.[xii]

Despite the looming potential consequences, most AI companies have paid little attention to the risk of libel created by the operation of generative AI.[xiii] While all AI companies should pay attention to these risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may be opening themselves up to by including an AI Overview on their results pages.

Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google do the searching for you, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About last week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google makes fixes to AI-generated search summaries after outlandish answers went viral, The Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI be sued for defamation?, Col. Journalism Rev. (March 18, 2024).

[ix] Id.

[x]  See Eugene Volokh, Large Libel Models? Liability For AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July of 2023, Jeffery Battle of Maryland filed suit against Microsoft for an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff Jeffery Battle is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI overview accuses Battle of crimes committed by a different man, Jeffrey Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s search engine results page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.


Social Media Platforms Won’t “Like” This: How Aggrieved Users Are Circumventing the Section 230 Shield

Claire Carlson, MJLST Staffer

Today, almost thirty years after modern social media platforms were introduced, 93% of teens use social media on a daily basis.[1] On average, teens spend nearly five hours a day on social media platforms, with a third reporting that they are “almost constantly” active on one of the top five leading platforms.[2] As social media usage has surged, concerns have grown among users, parents, and lawmakers about its impacts on teens, with primary concerns including cyberbullying, extremism, eating disorders, mental health problems, and sex trafficking.[3] In response, parents have brought a number of lawsuits against social media companies alleging the platforms market to children, connect children with harmful content and individuals, and fail to take the steps necessary to keep children safe.[4]

When facing litigation, social media companies often invoke the immunity granted to them under Section 230 of the Communications Decency Act.[5] 47 U.S.C. § 230 states, in relevant part, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[6] Federal courts are generally in consensus, interpreting the statutory language as providing broad immunity for social media providers.[7] Under this interpretive framework, social media companies can be held liable only for content they author, whereas Section 230 shields them from liability for harm arising from information or content posted by third-party users of their platforms.[8]

In V.V. v. Meta Platforms, Inc., plaintiffs alleged that popular social media platform Snapchat intentionally encourages use by minors and consequently facilitated connections between their twelve-year-old daughter and sex offenders, leading to her assault.[9] The court held that the facts of this case fell squarely within the intended scope of Section 230, as the harm alleged was the result of the content and conduct of third-party platform users, not Snapchat.[10] The court expressed that Section 230 precedent required it to deny relief to the plaintiffs, whose specific circumstances evoked outrage, asserting it lacked judicial authority to do otherwise without legislative action.[11] Consequently, the court held that Section 230 shielded Snapchat from liability for the harm caused by the third-party platform users and that plaintiffs’ only option for redress was to bring suit against the third-party users directly.[12]

After decades of cases like V.V., in which Section 230 has shielded social media companies from liability, plaintiffs are taking a new approach rooted in tort law. While Section 230 provides social media companies immunity from harm caused by their users, it does not shield them from liability for harm caused by their own platforms and algorithms.[13] Accordingly, plaintiffs are trying to bypass the Section 230 shield with product liability claims alleging that social media companies knowingly, and often intentionally, design defective products aimed at fostering teen addiction.[14] Many of these cases analogize social media companies to tobacco companies – maintaining that they are aware of the risks associated with their products and deliberately conceal them.[15] These claims coincide with the U.S. Surgeon General and more than 40 state attorneys general imploring Congress to pass legislation mandating warning labels on social media platforms emphasizing the risk of teen addiction and other negative health impacts.[16]

Last year, courts stayed tort addiction cases and postponed rulings in anticipation of the Supreme Court ruling on the first Section 230 immunity cases to come before it.[17] In the companion cases Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, the Supreme Court was expected to shed light on the scope of Section 230 immunity by deciding whether social media companies are immune from liability when a platform’s algorithm recommends content that causes harm.[18] In both, the Court declined to answer the Section 230 question and decided the cases on other grounds.[19]

Since then, while claims arising from third-party content continue to be dismissed, social media addiction cases have received positive treatment in both state and federal courts.[20] In a federal multidistrict litigation (MDL) proceeding, the presiding judge permitted hundreds of addiction cases alleging defective product (platform and algorithm) design to move forward. In September, the MDL judge issued a case management order that suggests an early 2026 trial date.[21] Similarly, a California state judge found that Section 230 does not shield social media companies from liability in hundreds of addiction cases, as the alleged harms are based on the companies’ design and operation of their platforms, not the content on them.[22] Thus, social media addiction cases are successfully using tort law to bypass Section 230 where their predecessor cases failed.

With hundreds of pending social media cases and the Supreme Court’s silence on the scope of Section 230 immunity, the future of litigating and understanding social media platform liability is uncertain.[23] However, the preliminary results seen in state and federal courts evince that Section 230 is not the infallible immunity shield that social media companies have grown to rely on.

Notes

[1] Leon Chaddock, What Percentage of Teens Use Social Media? (2024), Sentiment (Jan. 11, 2024), https://www.sentiment.io/how-many-teens-use-social-media/#:~:text=Surveys%20suggest%20that%20over%2093,widely%20used%20in%20our%20survey. In the context of this work, the term “teens” refers to people aged 13-17.

[2] Jonathan Rothwell, Teens Spend Average of 4.8 Hours on Social Media Per Day, Gallup (Oct. 13, 2023), https://news.gallup.com/poll/512576/teens-spend-average-hours-social-media-per-day.aspx; Monica Anderson, Michelle Faverio & Jeffrey Gottfried, Teens, Social Media and Technology 2023, Pew Rsch. Ctr. (Dec. 11, 2023), https://www.pewresearch.org/internet/2023/12/11/teens-social-media-and-technology-2023/.

[3] Chaddock, supra note 1; Ronald V. Miller, Social Media Addiction Lawsuit, Lawsuit Info. Ctr. (Sept. 20, 2024), https://www.lawsuit-information-center.com/social-media-addiction-lawsuits.html#:~:text=Social%20Media%20Companies%20May%20Claim,alleged%20in%20the%20addiction%20lawsuits.

[4] Miller, supra note 3.

[5] Tyler Wampler, Social Media on Trial: How the Supreme Court Could Permanently Alter the Future of the Internet by Limiting Section 230’s Broad Immunity Shield, 90 Tenn. L. Rev. 299, 311–13 (2023).

[6] 47 U.S.C. § 230 (2018).

[7] V.V. v. Meta Platforms, Inc., No. X06UWYCV235032685S, 2024 WL 678248, at *8 (Conn. Super. Ct. Feb. 16, 2024) (citing Brodie v. Green Spot Foods, LLC, 503 F. Supp. 3d 1, 11 (S.D.N.Y. 2020)).

[8] V.V., 2024 WL 678248, at *8; Poole v. Tumblr, Inc., 404 F. Supp. 3d 637, 641 (D. Conn. 2019).

[9] V.V., 2024 WL 678248, at *2.

[10] V.V., 2024 WL 678248, at *11.

[11] V.V., 2024 WL 678248, at *11.

[12] V.V., 2024 WL 678248, at *7, 11.

[13] Miller, supra note 3.

[14] Miller, supra note 3; Isaiah Poritz, Social Media Addiction Suits Take Aim at Big Tech’s Legal Shield, BL (Oct. 25, 2023), https://www.bloomberglaw.com/bloomberglawnews/tech-and-telecom-law/X2KNICTG000000?bna_news_filter=tech-and-telecom-law#jcite.

[15] Kirby Ferguson, Is Social Media Big Tobacco 2.0? Suits Over the Impact on Teens, Bloomberg (May 14, 2024), https://www.bloomberg.com/news/videos/2024-05-14/is-social-media-big-tobacco-2-0-video.

[16] Miller, supra note 3.

[17] Miller, supra note 3; Wampler, supra note 5, at 300, 321; In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., 702 F. Supp. 3d 809, 818 (N.D. Cal. 2023) (“[T]he Court was awaiting the possible impact of the Supreme Court’s decision in Gonzalez v. Google. Though that case raised questions regarding the scope of Section 230, the Supreme Court ultimately did not reach them.”).

[18] Wampler, supra note 5, at 300, 339-46; Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 409 (2023).

[19] Twitter, Inc. v. Taamneh, 598 U.S. 471, 505 (2023) (holding that the plaintiff failed to plausibly allege that defendants aided and abetted terrorists); Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023) (declining to address Section 230 because the plaintiffs failed to state a plausible claim for relief).

[20] Miller, supra note 3.

[21] Miller, supra note 3; 702 F. Supp. at 809, 862.

[22] Miller, supra note 3; Poritz supra note 14.

[23] Leading Case, supra note 18, at 400, 409.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc., which allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the new reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

Passed in 2008, BIPA was one of the earliest consumer protection laws governing biometric data collection. At the time, major corporations were piloting finger scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventative measures for companies to follow. BIPA’s protections are broad; companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, disclose why it is being collected, and disclose how long they intend to store the data.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as well as a company stores its other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce its data privacy protections. Following its passage, swaths of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was never misused.[10] Plaintiffs could recover for every violation of BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Because litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and a company found liable would owe, at minimum, $1,000 per violation. The lack of predictability often pushed corporate liability insurers toward settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller and mid-size companies.[14] The new provisions of BIPA target the court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Towards Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need them most. The stakes for mishandling biometric data are much higher than those for other collected data. While social security numbers and credit card numbers can be canceled and changed – with varying degrees of ease – most constituents would be unwilling to change their faces and fingerprints for the sake of _____.[15] Ongoing and future technology developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane, and our family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means they could be harmed again by the same company after recovering once, with no further redress.

Corporations, however, have been calling for reforms for years, and believe these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers also would encourage companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may recover only once, they can still recover actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions. Awards on a per-person basis can still result in hefty settlements or verdicts that will hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left to state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can only recover from an entity once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et. al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


Moderating Social Media Content: A Comparative Analysis of European Union and United States Policy

Jaxon Hill, MJLST Staffer

In the wake of the Capitol Hill uprising, then-President Donald Trump had several of his social media accounts suspended.1 Twitter explained that its decision to suspend Trump’s account was “due to the risk of further incitement of violence.”2 Though this decision caught a lot of attention in the public eye, Trump was not the first figure in the political sphere to have his account suspended.3 In response to the social media platforms’ alleged censorship, some states, mainly Florida and Texas, enacted anti-censorship laws limiting the ability of social media companies to moderate content.4 

Now, as litigation ensues for Trump and for the social media companies fighting the Texas and Florida legislation, the age-old question rears its ugly head: what is free speech?5 Do social media companies have a right to limit free speech? Social media companies are not bound by the First Amendment.6 Thus, barring valid legislation that says otherwise, they are allowed to restrict or moderate content on their platforms. But should they, and, if so, how? How does the answer to these questions differ for public officials on social media? To analyze these considerations, it is worthwhile to look beyond the borders of the United States. This analysis is not meant to presuppose any wrongful conduct on the part of social media companies. Rather, it serves as an opportunity to examine an alternative approach to social media content moderation that could provide more clarity to all interested parties. 

In the European Union, social media companies are required to provide clear and specific information whenever they restrict content on their platforms.7 These statements are called “Statements of Reasons” (“SoRs”), and they must include some reference to whatever law the post violated.8 All SoRs are made publicly available to ensure transparency between users and the organization.9 

An analysis of these SoRs yielded mixed results as to their efficacy, but it opened the door for potential improvements.10 Ultimately, the analysis showed inconsistencies among the various platforms in how and why they moderate content, but those inconsistencies could create an opening for legislators to clarify social media guidelines.11 

Applying this same principle domestically could allow for greater transparency among consumers, social media companies, and the government. By providing a publicly available rationale for every moderation decision, social media companies could continue to remove illegal content without crossing the line into censorship. It is worth noting, though, that this policy likely carries negative financial implications. With states potentially implementing vastly different policies, social media companies may face increased costs to ensure they are in compliance wherever they operate.12 Nevertheless, absorbing these costs up front may be preferable to “censorship” or “extremism, hatred, [or] misinformation and disinformation.”13

In terms of the specific application to government officials, it may seem that this alternative fails to offer any clarity to the current state of affairs. That assertion has some merit, as government officials in the EU have still been able to post harmful social media content without it being moderated.14 That said, politicians’ engagement with social media is a relatively new development, both domestically and internationally, so more research is needed to establish best practices. Regardless, increased transparency should keep social media companies from making moderation choices that lack grounding in the law.

 

Notes

1 Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk Of Further Incitement Of Violence’, NPR (Jan. 8, 2021), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

2 Id.

3 See Christian Shaffer, Deplatforming Censorship: How Texas Constitutionally Barred Social Media Platform Censorship, 55 Tex. Tech L. Rev. 893, 903-04 (2023) (giving an example of both conservative and liberal users that had their accounts suspended).

4 See Daveed Gartenstein-Ross et al., Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem, Lawfare (July 27, 2022), https://www.lawfaremedia.org/article/anti-censorship-legislation-flawed-attempt-address-legitimate-problem (explaining the Texas and Florida legislation in-depth).

5 See, e.g., Trump v. United States, 219 L. Ed. 2d 991, 1034 (2024) (remanding the case to the lower courts); Moody v. NetChoice, LLC, 219 L. Ed. 2d 1075, 1104 (2024) (remanding the case to the lower courts).

6 Evelyn Mary Aswad, Taking Exception to Assessments of American Exceptionalism: Why the United States Isn’t Such an Outlier on Free Speech, 126 Dick. L. Rev. 69, 72 (2021).

7 Chiara Drolsbach & Nicolas Pröllochs, Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2023), https://arxiv.org/html/2312.04431v1/#bib.bib56.

8  Id.

9 Id.

10 Id. This analysis showed that (1) content moderation varies across platforms in number, (2) content moderation is most often applied to videos and text, whereas images are moderated much less, (3) most rule-breaking content is decided via automated means (except X), (4) there is much variation among how often the platforms choose to moderate illegal content, and (5) the primary reasons for moderation include falling out of the scope of the platform’s services, illegal or harmful speech, and sexualized content. Misinformation was very rarely cited as the reason for moderation.

11 Id.

12 Perkins Coie LLP, More State Content Moderation Laws Coming to Social Media Platforms (Nov. 17, 2022), https://perkinscoie.com/insights/update/more-state-content-moderation-laws-coming-social-media-platforms (recommending that social media companies hire counsel to ensure they are complying with various state laws).

13 See, e.g., Shaffer, supra note 3 (detailing the harms of censorship); Gartenstein-Ross et al., supra note 4 (outlining the potential harms of restrictive content moderation).

14 Goujard et al., Europe’s Far Right Uses TikTok to Win Youth Vote, Politico (Mar. 17, 2024), https://www.politico.eu/article/tiktok-far-right-european-parliament-politics-europe/ (“Without evidence, [Polish far-right politician, Patryk Jaki] insinuated the person who carried out the attack was a migrant”).

 


An Incomplete Guide to Ethically Integrating AI Into Your Legal Practice

Kevin Frazier, Assistant Professor, Benjamin L. Crump College of Law, St. Thomas University

There is no AI exception in the Model Rules of Professional Conduct and corresponding state rules. Lawyers must proactively develop an understanding of the pros and cons of AI tools. This “practice guide” provides some early pointers for how to do just that—specifically, how to use AI tools while adhering to Model Rule 3.1.

Model Rule 3.1, in short, mandates that lawyers bring only claims with a substantial and legitimate basis in law and fact. This Rule becomes particularly relevant when using AI tools like ChatGPT in your legal research and drafting. On a seemingly daily basis, we hear of a lawyer misusing an AI tool and advancing a claim that is as real as Jack’s beanstalk.

The practice guide emphasizes the need for lawyers to independently verify the outputs from AI tools before relying on them in legal arguments. Such diligence ensures compliance with both Model Rule 3.1 and Federal Rule of Civil Procedure 11, which also discourages frivolous filings. Perhaps more importantly, it also saves the profession from damaging headlines that imply we’re unwilling to do our homework when it comes to learning the ins and outs of AI.

With those goals in mind, the guide offers a few practical steps to safely incorporate AI tools into legal workflows:

  1. Understand the AI Tool’s Function and Limitations: Knowing what the AI can and cannot do is crucial to avoiding reliance on inaccurate legal content.
  2. Independently Verify AI Outputs: Always cross-check AI-generated citations and arguments with trustworthy legal databases or resources.
  3. Document AI-Assisted Processes: Keeping a detailed record of how AI tools were used and verified can be crucial in demonstrating diligence and compliance with ethical standards.
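On the third step in particular, even a lightweight, consistently maintained log goes a long way toward demonstrating diligence. The following is a minimal Python sketch of one way such a record might be structured; the field names, the log file name, and the example entry are illustrative assumptions, not a prescribed or bar-approved format.

    import csv
    import os
    from datetime import datetime, timezone

    def log_ai_usage(prompt, ai_output, verified_against, verified_by,
                     log_path="ai_usage_log.csv"):
        """Append one record of an AI-assisted research step and how it was verified."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "ai_output_summary": ai_output[:500],  # truncate to keep the log readable
            "verified_against": verified_against,  # e.g., an official reporter or research database
            "verified_by": verified_by,            # the attorney responsible for the check
        }
        new_file = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
        with open(log_path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=record.keys())
            if new_file:
                writer.writeheader()  # write a header only for a brand-new log
            writer.writerow(record)

    # Hypothetical example entry: a citation suggested by an AI tool, checked before filing.
    log_ai_usage(
        prompt="Find Eighth Circuit cases on spoliation sanctions",
        ai_output="Suggested Doe v. Roe, 123 F.3d 456 (8th Cir. 1999) (hypothetical citation)",
        verified_against="Commercial legal research database (citation and holding confirmed)",
        verified_by="A. Attorney",
    )

A plain spreadsheet kept with the same discipline works just as well; the point is a contemporaneous record tying each AI output to a named verifier and a trusted source.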

The legal community, specifically bar associations, is actively exploring how to refine ethical rules to better accommodate AI tools. This evolving process necessitates that law students and practitioners stay informed about both technological advancements and corresponding legal ethics reforms.

For law students stepping into this rapidly evolving landscape, understanding how to balance innovation with ethical practice is key. The integration of AI in legal processes is not just about leveraging new tools but doing so in a way that upholds the integrity of the legal profession.


A Digital Brick in the Trump-Biden Wall

Solomon Steen, MJLST Staffer

“Alexander explained to a CBP officer at the limit line between the U.S. and Mexico that he was seeking political asylum and refuge in the United States; the CBP officer told him to “get the fuck out of here” and pushed him backwards onto the cement, causing bruising. Alexander has continued to try to obtain a CBP One appointment every day from Tijuana. To date, he has been unable to obtain a CBP One appointment or otherwise access the U.S. asylum process…”[1]

Alexander fled kidnapping and threats in Chechnya to seek security in the US.[2] His story is a common one among migrants who have received a similar welcome. People have died and been killed waiting for an appointment to apply for asylum at the border.[3] Children with autism and schizophrenia have had to wait, exposed to the elements.[4] People whose medical vulnerabilities should have entitled them to relief have instead been preyed upon by gangs or corrupt police.[5] What is the wall blocking these people from fleeing persecution and reaching safety in the US?

The Biden administration’s failed effort to pass bipartisan legislation to curb access to asylum is part of a broader pattern of Trump-Biden continuity in immigration policy.[6] This continuity is defined by bipartisan support for increased funding for Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) for enforcement of immigration law at the border and in the interior, respectively.[7] Successive Democratic and Republican administrations have increased investment in interior and border enforcement.[8] That investment has expanded technological mechanisms to surveil migrants and facilitate administration of removal.

As part of their efforts to curtail access to asylum, the Biden administration promulgated their Circumvention of Lawful Pathways rule.[9] This rule revived the Trump administration’s entry and transit bans.[10] The transit ban bars migrants from applying for asylum if they crossed through a third country en route to the US.[11] The entry ban bars asylum applicants who did not present themselves at a port of entry.[12] In East Bay Sanctuary Covenant v. Biden, the Ninth Circuit determined the rule was unlawful for directly contradicting Congressional intent in the INA granting a right of asylum to any migrant in the US regardless of manner of entry.[13] The Trump entry ban was similarly found unlawful for directly contravening the same language in the INA.[14] The Biden ban remains in effect to allow litigation regarding its legality to reach its ultimate conclusion.

The Circumvention of Lawful Pathways rule effecting the entry ban gave rise to a pattern and practice of metering asylum applicants, that is, requiring applicants to comply with specific conditions before presenting at a port of entry in order to avoid being turned back.[15] To facilitate the arrival of asylum seekers within a specific appointment window, DHS launched the CBP One app.[16] The app would ostensibly allow asylum applicants to schedule an appointment at a port of entry to present themselves for asylum.[17]

Al Otro Lado (AOL), Haitian Bridge, and other litigants have filed a complaint alleging the government lacks the statutory authorization to force migrants to seek an appointment through the app and that its design frustrates their rights.[18] AOL notes that by requiring migrants to make appointments to claim asylum via the app, the Biden administration has imposed a number of extra-statutory requirements on migrants entitled to claim asylum, which include that they:

(a) have access to an up-to-date, well-functioning smartphone;
(b) fluently read one of the few languages currently supported by CBP One;
(c) have access to a sufficiently strong and reliable mobile internet connection and electricity to submit the necessary information and photographs required by the app;
(d) have the technological literacy to navigate the complicated multi-step process to create an account and request an appointment via CBP One;
(e) are able to survive in a restricted area of Mexico for an indeterminate period of time while trying to obtain an appointment; and
(f) are lucky enough to obtain one of the limited number of appointments at certain POEs.[19]

The Civil Rights Education and Enforcement Center (CREEC) and the Texas Civil Rights Project have similarly filed a complaint with the Department of Homeland Security’s Office of Civil Rights and Civil Liberties alleging that CBP One is illegally inaccessible to disabled people and that this inaccessibility has consequently violated other rights they hold as migrants.[20] Migrants may become disabled as a consequence of the immigration process or of the persecution that establishes their prima facie claim to asylum.[21] The CREEC complaint specifically cites Section 508 of the Rehabilitation Act, which provides that disabled members of the public must enjoy access to government technology “comparable to the access” of everyone else.[22]

CREEC and AOL – and the other service organizations joining their respective complaints – note that they have limited capacity to assist asylum seekers.[23] Migrants without such institutional or community support would be more vulnerable to being denied access to asylum and to opportunistic criminal predation while they wait at the border.[24]

There is a litany of technical problems with the app that can frustrate meritorious asylum claims. The app requires applicants to submit a picture of their face.[25] The app’s facial recognition software frequently fails to identify portraits of darker-skinned people.[26] Racial persecution is one of the statutory grounds for claiming asylum.[27] A victim of race-based persecution can thus have their asylum claim frustrated on the basis of their race because of this app. Persecution on the basis of membership in a particular social group can also form the basis for an asylum claim.[28] An applicant could establish membership in a particular social group composed of certain disabled people.[29] People with facial disabilities have also struggled with the facial recognition feature.[30]

The mere fact that an app has replaced human interaction contributes to the frustration of disabled migrants’ statutory rights. Medically fragile people statutorily eligible to enter the US via humanitarian parole are unable to access that relief electronically.[31] Individuals with intellectual disabilities have also had their claims delayed by navigating CBP One.[32] Asylum officers are statutorily required to evaluate whether asylum seekers lack the mental competence to assist in their applications and, if so, ensure they have qualified assistance to vindicate their claims.[33]

The entry ban has textual exceptions for migrants whose attempts to set appointments are frustrated by technical issues.[34] CBP officials at many ports have a pattern and practice of ignoring those exceptions and refusing all migrants who lack a valid CBP One appointment.[35]

AOL seeks relief in the termination of the CBP One turnback policy: essentially, ensuring people can exercise their statutory right to claim asylum at the border without an appointment.[36] CREEC seeks relief in the form of a fully accessible CBP One app and accommodation policies to ensure disabled asylum seekers can have “meaningful access” to the asylum process.[37]

Comprehensively safeguarding asylum seekers’ rights would require more than abandoning CBP One. A process that ensures medically vulnerable persons can access timely care and that persons with intellectual disabilities can get legal assistance would require deploying more border resources, such as co-locating medical and resettlement organization staff with CBP. Meaningfully curbing racial, ethnic, and linguistic discrimination by CBP, ICE, and Asylum Officers would require expensive and extensive retraining. It is evident, however, that CBP One is not serving the ostensible goal of making the asylum process more efficient, though it may serve the political goal of reinforcing the wall.

Notes

[1] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[2] Id. at 46.

[3] Ana Lucia Verduzco & Stephanie Brewer, Kidnapping of Migrants and Asylum Seekers at the Texas-Tamaulipas Border Reaches Intolerable Levels, (Apr. 4, 2024) https://www.wola.org/analysis/kidnapping-migrants-asylum-seekers-texas-tamaulipas-border-intolerable-levels.

[4] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 28, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[5] Linda Urueña Mariño & Christina Asencio, Human Rights First Tracker of Reported Attacks During the Biden Administration Against Asylum Seekers and Migrants Who Are Stranded in and/or Expelled to Mexico, Human Rights First, (Jan. 13, 2022),  at 10, 16, 19, https://humanrightsfirst.org/wp-content/uploads/2022/02/AttacksonAsylumSeekersStrandedinMexicoDuringBidenAdministration.1.13.2022.pdf.

[6] Actions – H.R.815 – 118th Congress (2023-2024): National Security Act, 2024, H.R.815, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/815/all-actions, (failing to pass the immigration language on 02/07/24).

[7] American Immigration Council, The Cost of Immigration Enforcement and Border Security, (Jan. 20, 2021), at 2, https://www.americanimmigrationcouncil.org/sites/default/files/research/the_cost_of_immigration_enforcement_and_border_security.pdf.

[8] Id. at 3-4.

[9] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[10] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[11] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[12] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[13] Id. at 669-70.

[14] E. Bay Sanctuary Covenant v. Trump, 349 F. Supp. 3d 838, 844 (N.D. Cal. 2018).

[15] Complaint, at 2, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[16] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[17] Id.

[18] Complaint, at 57, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[19] Complaint, at 3, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[20] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[21] Ruby Ritchin, “I Felt Not Seen, Not Heard”: Gaps in Disability Access at USCIS for People Seeking Protection, 12, (Sep. 19, 2023) https://humanrightsfirst.org/library/i-felt-not-seen-not-heard-gaps-in-disability-access-at-uscis-for-people-seeking-protection.

[22] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 6, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[23] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also Complaint, at 4, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[24] Dara Lind, CBP’s Continued ‘Turnbacks’ Are Sending Asylum Seekers Back to Lethal Danger, (Aug. 10, 2023), https://immigrationimpact.com/2023/08/10/cbp-turnback-policy-lawsuit-danger.

[25] Complaint, at 31, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[26] Id.

[27] 8 U.S.C.A. § 1101(a)(42)(A) (West).

[28] Id.

[29] Hernandez Arellano v. Garland, 856 F. App’x 351, 353 (2d Cir. 2021).

[30] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 9, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[31] Id.

[32] Id.

[33] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[34] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[35] Id. at 23.

[36] Id. at 65-66.

[37] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 10-11, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promise of AI while managing the risks of its potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that advances equity and civil rights, fights algorithmic discrimination, and creates efficiency and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen. The impact of the Executive Order will depend on its implementation by the agencies that have been tasked with taking action. Among those tasked with implementing Biden’s Executive Order are the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls to action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other agency heads selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundation models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing conducted in accordance with guidance developed by NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launching a pilot that distributes computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC is encouraged to use its authorities to ensure fair competition in the AI and semiconductor industries.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and on employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan to utilize automated or algorithmic systems in administering public benefits and services and ensure equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies–Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its success in getting seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to make voluntary commitments to work toward developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments for being vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House has allowed only leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders in their AI Insight Forum, the attendees of the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, and professors at leading universities.[12] Marc Andreessen, a co-founder of the prominent VC fund Andreessen Horowitz, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration voicing their concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for the robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]
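To give a rough sense of how a compute-based size threshold operates, the sketch below applies the commonly used back-of-the-envelope estimate that training compute is roughly 6 × parameters × training tokens, and compares the result to the 10^26-operation reporting threshold in the Order’s technical conditions. The heuristic and the example model sizes are illustrative assumptions, not figures drawn from the Order.

    # Rough screen for whether a training run would trip the Executive Order's
    # reporting threshold of 1e26 integer or floating-point operations.
    # Heuristic (assumption): training compute ~ 6 * parameters * training tokens.

    REPORTING_THRESHOLD_OPS = 1e26  # compute threshold stated in the Order's technical conditions

    def estimated_training_ops(n_parameters: float, n_training_tokens: float) -> float:
        """Back-of-the-envelope estimate of total training operations."""
        return 6 * n_parameters * n_training_tokens

    # Illustrative (hypothetical) model sizes.
    examples = {
        "startup-scale model (7e9 params, 2e12 tokens)": (7e9, 2e12),
        "frontier-scale model (1.5e12 params, 15e12 tokens)": (1.5e12, 15e12),
    }

    for name, (params, tokens) in examples.items():
        ops = estimated_training_ops(params, tokens)
        flagged = ops > REPORTING_THRESHOLD_OPS
        print(f"{name}: ~{ops:.1e} ops -> reporting likely required: {flagged}")

On these assumed figures, only the frontier-scale run crosses the threshold; the concern voiced by Ng and others is that the Order’s broader definitions and reporting duties could nonetheless reach smaller and open source developers.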

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While neither Llama nor ChatGPT is open source today, their development and advancement relied heavily on open source architectures and frameworks like the Transformer, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team solely dedicated to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.