Artificial Intelligence

Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, protects online service providers from civil liability for content published on their servers by third parties. Essentially, it clarifies that if a Google search for one’s name produced a link to a blog post containing false and libelous content about that person, the falsely accused searcher could pursue a claim of defamation against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning the libelous results on the search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” for purposes of civil liability.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and writing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In its first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]

Besides going viral online, the silly results were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation in responses that were harder to identify as false. One such result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews and now rarely produces blatantly false results. But is “rarely” enough when 8.5 billion searches are run on Google each day?[vii]

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff would first have to show that § 230 of the Communications Decency Act does not bar claims based on generative AI output. A recent consensus of legal scholars anticipates that courts will likely find the CDA does not bar claims against a company producing libelous content through generative AI, because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to defamation claims against traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the words she publishes are her own creation, and she can be held liable for them despite sourcing some material from other speakers. Traditional search engines, on the other hand, have historically presented sourced material directly to the reader, so they are not the “speaker” and are therefore insulated from defamation claims. Enter generative AI: because its output is likely to be considered original work by courts, that insulation may erode.[ix] Effectively, introducing an AI Overview feature removes the statutory bar to claims under § 230 of the CDA that search engines have relied upon to avoid liability for defamation.

But even if § 230 does not bar defamation claims against a search engine’s libelous AI output, there is disagreement over whether humans rely on generative AI output seriously enough for it to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, displaying it at the top of a search engine’s results page lends it greater weight and authority, even though users might otherwise be wary of AI outputs.[xi]

While no landmark case law on liability for an AI system’s libelous output has developed to date, several lawsuits have already been filed over who is responsible for libelous content produced by generative AI, including at least one case against a search engine for AI-generated output displayed on its search results page.[xii]

Despite the looming potential for consequences, most AI companies have given little attention to the risk of libel created by the operation of generative AI.[xiii] While all AI companies should pay attention to the risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may be opening themselves up to by including an AI Overview on their results pages.

 

Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google do the searching for you, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About last week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google makes fixes to AI-generated search summaries after outlandish answers went viral, The Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI be sued for defamation?, Colum. Journalism Rev. (Mar. 18, 2024).

[ix] Id.

[x] See Eugene Volokh, Large Libel Models? Liability for AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July of 2023, Jeffery Battle of Maryland filed suit against Microsoft for an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff, Jeffery Battle, is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI overview accuses him of crimes committed by a different man, Jeffrey Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s search engine results page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.


An Incomplete Guide to Ethically Integrating AI Into Your Legal Practice

Kevin Frazier, Assistant Professor, Benjamin L. Crump College of Law, St. Thomas University

There is no AI exception in the Model Rules of Professional Conduct or the corresponding state rules. Lawyers must proactively develop an understanding of the pros and cons of AI tools. This “practice guide” provides some early pointers for doing just that: specifically, how to use AI tools while adhering to Model Rule 3.1.

Model Rule 3.1, in short, mandates that lawyers bring only claims with a substantial and legitimate basis in law and fact. This Rule becomes particularly relevant when using AI tools like ChatGPT in your legal research and drafting. On a seemingly daily basis, we hear of a lawyer misusing an AI tool and advancing a claim that is as real as Jack’s beanstalk.

The practice guide emphasizes the need for lawyers to independently verify the outputs of AI tools before relying on them in legal arguments. Such diligence ensures compliance with both MRPC 3.1 and Federal Rule of Civil Procedure 11, which likewise discourages frivolous filings. Perhaps more importantly, it also saves the profession from damaging headlines that imply we’re unwilling to do our homework when it comes to learning the ins and outs of AI.

With those goals in mind, the guide offers a few practical steps to safely incorporate AI tools into legal workflows:

  1. Understand the AI Tool’s Function and Limitations: Knowing what the AI can and cannot do is crucial to avoiding reliance on inaccurate legal content.
  2. Independently Verify AI Outputs: Always cross-check AI-generated citations and arguments with trustworthy legal databases or resources.
  3. Document AI-Assisted Processes: Keeping a detailed record of how AI tools were used and verified can be crucial in demonstrating diligence and compliance with ethical standards.

The legal community, specifically bar associations, is actively exploring how to refine ethical rules to better accommodate AI tools. This evolving process necessitates that law students and practitioners stay informed about both technological advancements and corresponding legal ethics reforms.

For law students stepping into this rapidly evolving landscape, understanding how to balance innovation with ethical practice is key. The integration of AI in legal processes is not just about leveraging new tools but doing so in a way that upholds the integrity of the legal profession.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of its potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that advances equity and civil rights, fights algorithmic discrimination, and creates efficiencies and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact remains to be seen and will depend on how the agencies tasked with taking action implement it. The regulatory heads charged with implementing Biden’s Executive Order include the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls to action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other agency heads selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundation models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing based on guidance developed by NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launch a pilot distributing computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industry.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan for using automated or algorithmic systems in administering public benefits and services while ensuring equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies: Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its achievement of getting seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to make voluntary commitments to work towards developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on the societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments for being vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on the commitments.[10] Notably, the White House has only allowed leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders at their AI Insight Forums, the attendees at the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, and professors at leading universities.[12] Marc Andreessen, a co-founder of Andreessen Horowitz, a prominent VC fund, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration in which they voiced their concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for the robust development of open-source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]

Open-source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open-source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While neither Llama nor ChatGPT remains open source, their development and advancement relied heavily on open-source building blocks such as the Transformer architecture and the TensorFlow and PyTorch frameworks.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team solely dedicated to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open-source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


Regulating the Revolution: A Legal Roadmap to Optimizing AI in Healthcare

Fazal Khan, MD-JD: Nexbridge AI

In the field of healthcare, the integration of artificial intelligence (AI) presents a profound opportunity to revolutionize care delivery, making it more accessible, cost-effective, and personalized. Burgeoning demographic shifts, such as aging populations, are exerting unprecedented pressure on our healthcare systems, exacerbating disparities in care and already-soaring costs. Concurrently, the prevalence of medical errors remains a stubborn challenge. AI stands as a beacon of hope in this landscape, capable of augmenting healthcare capacity and access, streamlining costs by automating processes, and refining the quality and customization of care.

Yet, the journey to harness AI’s full potential is fraught with challenges, most notably the risks of algorithmic bias and the diminution of human interaction. AI systems, if fed with biased data, can become vehicles of silent discrimination against underprivileged groups. It is essential to implement ongoing bias surveillance, promote the inclusion of diverse data sets, and foster community involvement to avert such injustices. Healthcare institutions bear the responsibility of ensuring that AI applications are in strict adherence to anti-discrimination statutes and medical ethical standards.

Moreover, it is crucial to safeguard the essence of human touch and empathy in healthcare. AI’s prowess in automating administrative functions cannot replace the human art inherent in the practice of medicine—be it in complex diagnostic processes, critical decision-making, or nurturing the therapeutic bond between healthcare providers and patients. Policy frameworks must judiciously navigate the fine line between fostering innovation and exercising appropriate control, ensuring that technological advancements do not overshadow fundamental human values.

The quintessential paradigm would be one where human acumen and AI’s analytical capabilities coalesce seamlessly. While humans should steward the realms requiring nuanced judgment and empathic interaction, AI should be relegated to the execution of repetitive tasks and the extrapolation of data-driven insights. Placing patients at the epicenter, this symbiotic union between human clinicians and AI can broaden access to healthcare, reduce expenditures, and enhance service quality, all the while maintaining trust through unyielding transparency. Nonetheless, the realization of such a model mandates proactive risk management and the encouragement of innovation through sagacious governance. By developing governmental and institutional policies that are both cautious and compassionate by design, AI can indeed be the catalyst for a transformative leap in healthcare, enriching the dynamics between medical professionals and the populations they serve.


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC claims the proposed rule is intended to address the potential for AI to hurt more investors more quickly than ever before, but some critics argue that the SEC’s proposed rule would reach far beyond generative AI, covering nearly all technology. Critics also highlight the requirement that conflicts of interest be eliminated or neutralized as nearly impossible to meet and a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice and comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In its public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s new prescriptive requirement that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities laws, under which disclosure of potential conflicts of interest has historically been sufficient.[21] Historically, conflicts of interest stemming from AI and technology have been regulated like any other conflict of interest: while brokers are required to disclose their conflicts, their conduct is primarily regulated through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest arising from such technology does not necessarily require advisers to discard the technology, but rather to change the way that firm-favorable information is analyzed or weighed; even so, this still marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule, both through the notice and comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how the SEC could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of deep learning in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI and conflicts of interest, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. and Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. Exch. Comm’n., Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (proposed July 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Brushstroke Battles: Unraveling Copyright Challenges With AI Artistry

Sara Seid, MJLST Staffer

Introduction

Imagine this: after a long day of thinking and participating in society, you decide to curl up on the couch with your phone and crack open a new fanfiction to decompress. Fanfiction, a fictional work of writing based on another fictional work, has increased in popularity due to the expansion and increased use of the internet. Many creators publish their works to websites like Archive of Our Own (AO3) or Tumblr. These websites are free and provide a community for creative minds to share their creative works. While the legality of fanfiction in general is debated, the real concern among creators is AI-generated works. Original characters and works are being used for profit to “create” works through artificial intelligence. Profits can be generated from fanfiction through the use of paid AI text generators to create written works, or through advertisements on platforms. What was once a celebration of favorite works has become tarnished through the theft of fanfiction by AI programs.

First Case to Address the Issue

Thaler v. Perlmutter is a new and instructive case on the issue of copyright and AI-generated creative works, namely artwork.[1] The action was brought by Stephen Thaler against the Copyright Office for denying his copyright application due to the work’s lack of human authorship.[2] The U.S. District Court for the District of Columbia was the first court to rule on whether AI-generated art can receive copyright protection.[3] The court held that AI-created artwork could not be copyrighted.[4] In considering the plaintiff’s copyright registration application for “A Recent Entrance to Paradise,” the Register concluded that this particular work would not support a claim to copyright because the work “lacked human authorship and thus no copyright existed in the first instance.”[5] The plaintiff’s primary contention was that because the artwork was produced by the computer program he created, the product of its AI capabilities was his own.[6]

The court went on to opine that copyright is designed to adapt with the times.[7] Underlying that adaptability, however, has been a “consistent understanding that human creativity is the sine qua non at the core of copyrightability,” even as that human creativity is channeled through new tools or into new media.[8] Therefore, despite the plaintiff’s creation of the computer program, the painting was not produced by a human and was not eligible for copyright. This opinion, while relevant and clear, still leaves unanswered questions about the degree of human involvement required in AI-generated work.[9] What level of human involvement is necessary for an AI creation to qualify for copyright?[10] Is there a percentage to meet? Does the AI program require multiple humans to work on it as a prerequisite? Adaptability with the times, while essential, also means that there are new, developing questions about the right ways to address new technology and its capabilities.

Implications of the Case for Fanfiction

Artificial intelligence is a new concern among scholars. While its accessibility and convenience create endless new possibilities for a multitude of careers, it also directly threatens creative professions and creative outlets. Without the consent of or authority from creators, AI can use algorithms that process artwork and fictional literary works created by fans to create its own “original” work. AI can be used to replace professional and amateur creative writers. Additionally, as AI’s technological capacity increases, it can mimic and reproduce art that resembles or belongs to a human artist.[11]

However, the main concern for artists is what AI will do to human creative industries in general.[12] Legal scholars are equally concerned about what AI means for copyright law.[13] The main type of AI that fanfiction writers are concerned about is generative AI.[14] Essentially, huge datasets are scraped together to train the AI, and through a technical process the AI is able to devise new content that resembles the training data but is not identical to it.[15] Creators are outraged at what they consider to be theft of their artistic creations.[16] Artwork such as illustrations for articles, books, or album covers may soon face competition from AI, undermining a thriving area of commercial art as well.[17]

Currently, fanfiction is protected under the doctrine of fair use, which allows creators to add new elements, criticism, or commentary to an already existing work in a way that transforms it.[18] The next question likely to stem from Thaler will be whether AI creations are subject to the same protections that fan-created works are.

The fear of the possible consequences of AI can be slightly assuaged through the reality that AI cannot accurately and genuinely capture human memory, thoughts, and emotional expression. These human skills will continue to make creators necessary for their connections to humanity and the ability to express that connection. How a fan resonates with a novel or T.V. show, and then produces a piece of work based on that feeling, is uniquely theirs. The decision in Thaler reaffirms this notion. AI does not offer the human creative element that is required to both receive copyright and also connect with viewers in a meaningful way.[19]

Furthermore, the difficulty with new technology like AI is that it cannot be understood immediately, which can cause frustration or a sense of threat. Change is uncomfortable. However, with knowledge and experience, AI might become a useful tool for fanfiction creators.

The element of creative projects that makes them so meaningful to people is the way they can provide true insight and an experience that is relatable and distinctly human.[20] The alternative to banning AI or completely rendering human artists obsolete is to find a middle ground that protects both sides. The interests of technological innovation should not supersede the concerns of artists and creators.

Ultimately, as stated in Thaler, AI artwork that has no human authorship does not get copyright.[21] However, this still leaves unanswered questions that future cases will likely present before the courts. Are there protections that can be made for online creators’ artwork and fictional writings to prevent their use or presence in AI databases? The Copyright Act exists to be malleable and adaptable with time.[22] Human involvement and creative control will have to be assessed as AI becomes more prominent in personal and professional settings.

Notes

[1] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1 (D.D.C. Aug. 18, 2023).

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Id. at *3.

[7] Id. at *10.

[8] Id.

[9] https://www.natlawreview.com/article/judge-rules-content-generated-solely-ai-ineligible-copyright-ai-washington-report.

[10] Id.

[11] https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai#:~:text=AI%20doesn%27t%20do%20the,what%20AI%20art%20is%20doing.%E2%80%9D.

[12] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[13] https://www.reuters.com/legal/ai-generated-art-cannot-receive-copyrights-us-court-says-2023-08-21.

[14] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[15] Id.

[16] Id.

[17] Id.

[18] https://novelpad.co/blog/is-fanfiction-legal# (citing Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994)).

[19] https://www.reuters.com/default/humans-vs-machines-fight-copyright-ai-art-2023-04-01/.

[20] https://news.harvard.edu/gazette/story/2023/08/is-art-generated-by-artificial-intelligence-real-art/.

[21] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1.

[22] Id. at *10.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary given the incentives for private companies to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who post anonymously. Proposed federal legislation would require that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[13] However, opponents view this as useless legislation: deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first instance.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Generate a JLST Blog Post: In the Absence of Regulation, Generative AI May Be Reined in Through the Courts

Ted Mathiowetz, MJLST Staffer

In the space of a year, artificial intelligence (AI) seems to have grabbed hold of the contemporary conversation around technology and calls for increased regulation. With ChatGPT’s release in late November of 2022, as well as the release of various other art generation software earlier in the year, the conversation surrounding tech regulation quickly centered on AI. In the wake of growing Congressional focus on AI, the White House quickly proposed a blueprint for a preliminary AI Bill of Rights as fears over unregulated advances in technology grew.[1] The debate has raged on over the potential efficacy of this Bill of Rights and whether it could be enacted in time to rein in AI development.[2] But while Washington weighs whether the current regulatory framework will effectively set some ground rules, the matter of AI has already begun to be litigated.[3]

Fear over the power of AI has been mounting in numerous sectors as ChatGPT has proven its ability to pass exams such as the Multistate Bar Exam,[4] the U.S. Medical Licensing Exam, and more.[5] Fears over AI’s capabilities and potential advancements are not confined to academia, either. The legal industry is already circling the wagons to prevent AI lawyers from representing would-be clients in court.[6] Edelson, a law firm based in Chicago, filed a class action complaint in California state court alleging that DoNotPay, an AI service that markets itself as “the world’s first robot lawyer,” unlawfully provides a range of legal services.[7] The complaint alleges that DoNotPay is engaging in unlawful business practice by “holding itself out to be an attorney”[8] and “engaging in the unlawful practice of law by selling legal services… when it was not licensed to practice law.”[9]

Additional litigation has been filed against the makers of AI art generators, alleging copyright violations.[10] The plaintiffs argue that a swath of AI firms violated the Digital Millennium Copyright Act in constructing their AI models by using software that copied millions of images, without compensating those whose images were copied, as references for building out user-requested images.[11] Notably, both of these suits are class-action lawsuits[12] and may serve as a strong blueprint for how wary parties can rein in AI through the court system.

Faridian v. DONOTPAY, Inc. — The Licensing Case

AI is here to stay for the legal industry, for better or worse.[13] However, while some have been sounding the alarm for years that AI will replace lawyers altogether,[14] the truth is likely to be quite different, with AI becoming a tool that helps lawyers become more efficient.[15] There are nonetheless existential threats to the industry, as seen in the Faridian case, in which DoNotPay allows people to write wills, contracts, and more without the help of a trained legal professional. This has led to shoddy AI-generated work, which creates concern that AI legal technology will likely lead to more troublesome legal action down the line for its users.[16]

It seems as though the AI lawyer revolution may not be around much longer. In addition to the Faridian case, in which DoNotPay is being sued because its robot lawyer mainly engages in transactional work, the company has also run into problems trying to litigate. DoNotPay tried to get its AI attorney into court to dispute traffic tickets and was later “forced” to withdraw the technology’s help in court after “multiple state bar associations [threatened]” to sue and cautioned that the move could mean potential prison time for the CEO, Joshua Browder.[17]

Given that most states require bar applicants to (1) complete a Juris Doctor program at an accredited institution, (2) pass the bar exam, and (3) pass a moral character evaluation in order to practice law, it is rather likely that robot lawyers will not see a courtroom for some time, if ever. Instead, there may be a pro se revolution of sorts, wherein litigants aid themselves with AI legal services outside of the courtroom.[18] But for the most part, the legal field will likely incorporate AI into its repository of technology rather than be replaced by it. Nevertheless, the Faridian case, depending on its outcome, will likely provide a clear path to litigation for other occupations with extensive licensing requirements that are endangered by AI advancement.

Sarah Andersen et al. v. Stability AI Ltd. — The Copyright Case

For occupations that lack the legal field’s barriers to entry, there is another way forward in the courts to try to stem the tide of AI in the absence of regulation. In the Andersen case, a class of artists has brought suit against various AI art generation companies for infringing their copyrighted artwork by using their work to create the reference framework for generated images.[19] The function of the generative AI is relatively straightforward. For example, if I were to log on to an AI art generator and type in “Generate Lionel Messi in the style of Vincent Van Gogh,” it would produce an image of Lionel Messi in the style of Van Gogh’s “Self-Portrait with a Bandaged Ear.” There is no copyright on Van Gogh’s artwork, but to create the generated image the AI accesses all kinds of copyrighted artwork in the style of Van Gogh, as well as copyrighted images of Lionel Messi, for reference points. The AI image services have thus created a multitude of legal issues for their parent companies, including claims of direct copyright infringement for storing copies of the works in building out the system, vicarious copyright infringement when consumers generate artwork in the style of a given artist, and DMCA violations for not properly attributing existing work, among other claims.[20]

This case is being closely watched and hotly debated, as a ruling against AI could invite claims against other generative AI tools, such as ChatGPT, for not properly attributing or paying for material used in building out the model.[21] Defendants have argued that the use of copyrighted material constitutes fair use, but these claims have not yet been fully litigated, so we will have to wait for a decision on that front.[22] It is clear that as fast as generative AI seemed to take hold of the world, litigation calling its future into question has ramped up just as quickly. Other governments are also becoming increasingly wary of the technology, with Italy already banning ChatGPT and Germany heavily considering it, citing "data security concerns."[23] It remains to be seen how the United States will deal with this new technology in terms of regulation or an outright ban, but it is clear that the current battleground is in the courts.

Notes

[1] See Blueprint for an AI Bill of Rights, The White House (Oct. 5, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/; Pranshu Verma, The AI ‘Gold Rush’ is Here. What will it Bring? Wash. Post (Jan. 20, 2023), https://www.washingtonpost.com/technology/2023/01/07/ai-2023-predictions/.

[2] See Luke Hughes, Is an AI Bill of Rights Enough?, TechRadar (Dec. 10, 2022), https://www.techradar.com/features/is-an-ai-bill-of-rights-enough; see also Ashley Gold, AI Rockets ahead in Vacuum of U.S. Regulation, Axios (Jan. 30, 2023), https://www.axios.com/2023/01/30/ai-chatgpt-regulation-laws.

[3] Ashley Gold, supra note 2.

[4] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score nearing 90th Percentile, ABA J. (Mar. 16, 2023), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.

[5] See e.g., Lakshmi Varanasi, OpenAI just announced GPT-4, an Updated Chatbot that can pass everything from a Bar Exam to AP Biology. Here’s a list of Difficult Exams both AI Versions have passed., Bus. Insider (Mar. 21, 2023), https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

[6] Stephanie Stacey, ‘Robot Lawyer’ DoNotPay is being Sued by a Law Firm because it ‘does not have a Law Degree’, Bus. Insider (Mar. 12, 2023), https://www.businessinsider.com/robot-lawyer-ai-donotpay-sued-practicing-law-without-a-license-2023-3.

[7] Sara Merken, Lawsuit Pits Class Action Firm against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/.

[8] Complaint at 2, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[9] Id. at 10.

[10] Riddhi Setty, First AI Art Generator Lawsuits Threaten Future of Emerging Tech, Bloomberg L. (Jan. 20, 2023), https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech.

[11] Complaint at 1, 13, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[12] Id. at 12; Complaint at 1, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[13] See e.g., Chris Stokel-Walker, Generative AI is Coming for the Lawyers, Wired (Feb. 21, 2023), https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/.

[14] Dan Mangan, Lawyers could be the Next Profession to be Replaced by Computers, CNBC (Feb.17, 2017), https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.

[15] Stokel-Walker, supra note 13.

[16] Complaint at 7, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[17] Debra Cassens Weiss, Traffic Court Defendants lose their ‘Robot Lawyer’, ABA J. (Jan. 26, 2023), https://www.abajournal.com/news/article/traffic-court-defendants-lose-their-robot-lawyer#:~:text=Joshua%20Browder%2C%20a%202017%20ABA,motorists%20contest%20their%20traffic%20tickets..

[18] See Justin Snyder, RoboCourt: How Artificial Intelligence can help Pro Se Litigants and Create a “Fairer” Judiciary, 10 Ind. J.L. & Soc. Equality 200 (2022).

[19] See Complaint, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[20] Id. at 10–12.

[21] See e.g., Dr. Lance B. Eliot, Legal Doomsday for Generative AI ChatGPT if Caught Plagiarizing or Infringing, warns AI Ethics and AI Law, Forbes (Feb. 26, 2023), https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=790aecab122b.

[22] Ron N. Dreben, Generative Artificial Intelligence and Copyright Current Issues, Morgan Lewis (Mar. 23, 2023), https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues.

[23] Nick Vivarelli, Italy’s Ban on ChatGPT Sparks Controversy as Local Industry Spars with Silicon Valley on other Matters, Yahoo! (Apr. 3, 2023), https://www.yahoo.com/entertainment/italy-ban-chatgpt-sparks-controversy-111415503.html; Adam Rowe, Germany might Block ChatGPT over Data Security Concerns, Tech.Co (Apr. 3, 2023), https://tech.co/news/germany-chatgpt-data-security.


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences on human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans and it can generate articles, fictional stories, poems, and computer code by responding to prompts queried by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. This GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or attempting the same prompt multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
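To make the "predict the next word" idea concrete, below is a minimal, illustrative sketch in Python. The toy "model" here is just a bigram word counter built from a one-sentence sample corpus; it is emphatically not ChatGPT's architecture or training process, only a demonstration of the autoregressive loop described above, in which each new word is chosen based on the text generated so far.

```python
# A toy sketch of autoregressive generation: the "model" is a bigram counter,
# a stand-in for a real language model, used only to show the prediction loop.
from collections import Counter, defaultdict
import random

corpus = "the robot writes the brief and the lawyer reviews the brief".split()

# Count which words tend to follow which (the toy stand-in for training).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word: str, length: int = 6) -> list[str]:
    """Repeatedly predict the next word given the text produced so far."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break
        words, weights = zip(*candidates.items())
        output.append(random.choices(words, weights=weights)[0])
    return output

print(" ".join(generate("the")))
```

A real large language model replaces the word counts with billions of learned parameters and predicts over an entire vocabulary, but the generation loop follows the same basic shape.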

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that it is a tipping point for AI because of this difference in quality: it can be used to write weight-loss plans and children's books, and even to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words, nowhere near the 1,000-word minimum I had set for it. What is evident from these examples, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student using ChatGPT complete a four-hour project in less than an hour by writing computer code for a startup prototype using code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, although it did so by the skin of its silicon teeth. Indeed, it was even able to pass Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class, and would actually be placed on academic probation, it would still notably graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being used only in areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that online customer service roles have been taken over by AI, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities regarding productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can suddenly find inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

Some drawbacks to using AI and ChatGPT for these tasks are that while ChatGPT gives human-like answers, it does not necessarily give the right answers. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can explain data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed to be impossible sooner rather than later. In this light, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach the AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning uses user data, and it described itself as a “living” AI, one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the idea that this AI is indiscriminately sucking in user data like a black hole.
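To illustrate the privacy stakes of that feedback loop, here is a minimal sketch of how user conversations could be logged and accumulated as future fine-tuning material. This is an assumption about the general pattern described above, not OpenAI's actual pipeline; the function name, data layout, and the "Jane Doe" prompt are hypothetical placeholders.

```python
# Illustrative only: one plausible way user chats could flow into a
# fine-tuning dataset. Not OpenAI's real code or data schema.
import datetime
import json

fine_tuning_dataset = []

def log_interaction(prompt: str, response: str) -> None:
    """Store a user interaction so it could later feed the fine-tuning process."""
    fine_tuning_dataset.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,        # may contain personal or confidential details
        "response": response,
    })

# A hypothetical prompt containing exactly the kind of confidential client
# information the article warns about.
log_interaction(
    "Draft a contract for my client, Jane Doe of 123 Main Street.",
    "Here is a draft contract...",
)
print(json.dumps(fine_tuning_dataset, indent=2))
```

Whatever the real architecture looks like, the key point stands: once a prompt is submitted, the user has little visibility into how long it is retained or how it is reused.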

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might be able to glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on their GPS data. In comparison, the way that our data is being used by ChatGPT is in a league of its own. User data is being iterated upon, and most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

At the same time, the general public may not have full awareness of what kind of privacy protections, or lack thereof, are in place in the United States. In brief, we tend to favor free expression over the protection of individual privacy. The federal statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of the ECPA addresses matters like the interception of communications through wiretapping, or government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise because the collection and use of data is the core function of this AI. Does the GDPR even allow for the use of ChatGPT, given how user data is constantly used to evolve the technology?[7] Collecting and using the data of people in the EU without a lawful basis violates the GDPR, but how “use” applies to ChatGPT is not clear; the data flowing into ChatGPT’s fine-tuning process could arguably be such a violation.

While a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and disclosing it this way could potentially violate ABA confidentiality rules. As ChatGPT stirs up even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they are inputting into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company behind ChatGPT, governs ChatGPT’s data practices. OpenAI states that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), affiliates and subsidiaries of the company, the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not even contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it arrives at all. If users of ChatGPT can be cognizant of what they are inputting into the tool and stay informed about what kind of obligation OpenAI has to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Info Security Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.