
In the Absence of Regulation, Generative AI May Be Reined in Through the Courts

Ted Mathiowetz, MJLST Staffer

In the space of a year, artificial intelligence (AI) seems to have grabbed hold of the contemporary conversation around technology and prompted calls for increased regulation. With ChatGPT’s release in late November 2022, as well as the release of various AI art-generation tools earlier in the year, the conversation surrounding tech regulation quickly centered on AI. In the wake of growing Congressional focus on AI, the White House proposed a blueprint for a preliminary AI Bill of Rights as fears over unregulated advances in the technology have grown.[1] Debate has raged over the potential efficacy of this Bill of Rights and whether it could be enacted in time to rein in AI development.[2] But while Washington weighs whether the current regulatory framework will effectively set some ground rules, the matter of AI has already begun to be litigated.[3]

Fear over the power of AI has been mounting in numerous sectors as ChatGPT has proven its ability to pass exams such as the Multistate Bar Exam,[4] the U.S. Medical Licensing Exam, and more.[5] Fears over AI’s capabilities and potential advancements are not confined to academia, either. The legal industry is already circling the wagons to prevent AI lawyers from representing would-be clients in court.[6] Edelson, a law firm based in Chicago, filed a class action complaint in California state court alleging that DoNotPay, an AI service that markets itself as “the world’s first robot lawyer,” unlawfully provides a range of legal services.[7] The complaint alleges that DoNotPay is engaging in unlawful business practices by “holding itself out to be an attorney”[8] and “engaging in the unlawful practice of law by selling legal services… when it was not licensed to practice law.”[9]

Additional litigation has been filed against the makers of AI art generators, alleging copyright violations.[10] The plaintiffs argue that a swath of AI firms violated the Digital Millennium Copyright Act in constructing their AI models by using software that copied millions of images as references for building out user-requested images, without compensating those whose images were copied.[11] Notably, both of these suits are class actions[12] and may serve as a strong blueprint for how wary parties can rein in AI through the court system.

Faridian v. DONOTPAY, Inc. — The Licensing Case

AI is here to stay for the legal industry, for better or worse.[13] But although some have been sounding the alarm for years that AI will replace lawyers altogether,[14] the truth is likely to be quite different, with AI becoming a tool that helps lawyers work more efficiently.[15] There are nonetheless existential threats to the industry, as seen in the Faridian case, in which DoNotPay allowed people to write wills, contracts, and more without the help of a trained legal professional. This has led to shoddy AI-generated work, raising concerns that AI legal technology will lead to more troublesome legal problems down the line for its users.[16]

The AI-lawyer revolution, moreover, may not be around much longer. In addition to the Faridian case, in which DoNotPay is being sued primarily over its robot lawyer’s transactional work, the company has also run into problems trying to litigate. DoNotPay tried to bring its AI attorney into court to dispute traffic tickets but was later “forced” to withdraw the technology’s help after “multiple state bar associations [threatened]” to sue and the company was cautioned that the move could mean potential prison time for its CEO, Joshua Browder.[17]

Given that most states require applicants to the bar to 1) complete a Juris Doctor program at an accredited institution, 2) pass the bar exam, and 3) pass a moral character evaluation in order to practice law, it is rather likely that robot lawyers will not see a courtroom for some time, if ever. Instead, there may be a pro se revolution of sorts, in which litigants aid themselves with AI legal services outside of the courtroom.[18] But for the most part, the legal field will likely incorporate AI into its repository of technology rather than be replaced by it. Nevertheless, the Faridian case, depending on its outcome, will likely provide a clear path forward for litigation by occupations with extensive licensing requirements that are endangered by AI advancement.

Sarah Andersen et al. v. Stability AI Ltd. — The Copyright Case

For occupations that lack the barriers to entry the legal field enjoys, there is another way forward in the courts to try to stem the tide of AI in the absence of regulation. In the Andersen case, a class of artists has brought suit against various AI art-generation companies for infringing their copyrighted artwork by using it to create the reference framework for generated images.[19] The function of the generative AI is relatively straightforward. For example, if I were to log on to an AI art generator and type in “Generate Lionel Messi in the style of Vincent Van Gogh,” it would produce an image of Lionel Messi in the style of Van Gogh’s “Self-Portrait with a Bandaged Ear.” There is no copyright on Van Gogh’s artwork, but the AI accesses all kinds of copyrighted artwork in the style of Van Gogh, as well as copyrighted images of Lionel Messi, as reference points to create the generated image. The AI image services have thus created a multitude of legal issues for their parent companies, including claims of direct copyright infringement for storing copies of the works in building out the system, vicarious copyright infringement when consumers generate artwork in the style of a given artist, and DMCA violations for not properly attributing existing work, among other claims.[20]

This case is being closely watched and hotly debated, as a ruling against the AI companies could lead to claims against other generative AI tools, such as ChatGPT, for not properly attributing or paying for the material used in building them out.[21] Defendants have claimed that the use of copyrighted material constitutes fair use, but these claims have not yet been fully litigated, so we will have to wait for a decision to come down on that front.[22] It is clear that as fast as generative AI seemed to take hold of the world, litigation calling its future into question has ramped up. Other governments are also becoming increasingly wary of the technology, with Italy already banning ChatGPT and Germany heavily considering it, citing “data security concerns.”[23] It remains to be seen how the United States will deal with this new technology in terms of regulation or an outright ban, but it is clear that the current battleground is in the courts.

Notes

[1] See Blueprint for an AI Bill of Rights, The White House (Oct. 5, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/; Pranshu Verma, The AI ‘Gold Rush’ is Here. What will it Bring? Wash. Post (Jan. 20, 2023), https://www.washingtonpost.com/technology/2023/01/07/ai-2023-predictions/.

[2] See Luke Hughes, Is an AI Bill of Rights Enough?, TechRadar (Dec. 10, 2022), https://www.techradar.com/features/is-an-ai-bill-of-rights-enough; see also Ashley Gold, AI Rockets ahead in Vacuum of U.S. Regulation, Axios (Jan. 30, 2023), https://www.axios.com/2023/01/30/ai-chatgpt-regulation-laws.

[3] Gold, supra note 2.

[4] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score nearing 90th Percentile, ABA J. (Mar. 16, 2023), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.

[5] See, e.g., Lakshmi Varanasi, OpenAI just announced GPT-4, an Updated Chatbot that can pass everything from a Bar Exam to AP Biology. Here’s a list of Difficult Exams both AI Versions have passed., Bus. Insider (Mar. 21, 2023), https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

[6] Stephanie Stacey, ‘Robot Lawyer’ DoNotPay is being Sued by a Law Firm because it ‘does not have a Law Degree’, Bus. Insider (Mar. 12, 2023), https://www.businessinsider.com/robot-lawyer-ai-donotpay-sued-practicing-law-without-a-license-2023-3.

[7] Sara Merken, Lawsuit Pits Class Action Firm against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/.

[8] Complaint at 2, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[9] Id. at 10.

[10] Riddhi Setty, First AI Art Generator Lawsuits Threaten Future of Emerging Tech, Bloomberg L. (Jan. 20, 2023), https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech.

[11] Complaint at 1, 13, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[12] Id. at 12; Complaint at 1, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[13] See, e.g., Chris Stokel-Walker, Generative AI is Coming for the Lawyers, Wired (Feb. 21, 2023), https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/.

[14] Dan Mangan, Lawyers could be the Next Profession to be Replaced by Computers, CNBC (Feb. 17, 2017), https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.

[15] Stokel-Walker, supra note 13.

[16] Complaint at 7, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[17] Debra Cassens Weiss, Traffic Court Defendants lose their ‘Robot Lawyer’, ABA J. (Jan. 26, 2023), https://www.abajournal.com/news/article/traffic-court-defendants-lose-their-robot-lawyer#:~:text=Joshua%20Browder%2C%20a%202017%20ABA,motorists%20contest%20their%20traffic%20tickets..

[18] See Justin Snyder, RoboCourt: How Artificial Intelligence can help Pro Se Litigants and Create a “Fairer” Judiciary, 10 Ind. J.L. & Soc. Equality 200 (2022).

[19] See Complaint, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[20] Id. at 10–12.

[21] See, e.g., Dr. Lance B. Eliot, Legal Doomsday for Generative AI ChatGPT if Caught Plagiarizing or Infringing, warns AI Ethics and AI Law, Forbes (Feb. 26, 2023), https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=790aecab122b.

[22] Ron. N. Dreben, Generative Artificial Intelligence and Copyright Current Issues, Morgan Lewis (Mar. 23, 2023), https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues.

[23] Nick Vivarelli, Italy’s Ban on ChatGPT Sparks Controversy as Local Industry Spars with Silicon Valley on other Matters, Yahoo! (Apr. 3, 2023), https://www.yahoo.com/entertainment/italy-ban-chatgpt-sparks-controversy-111415503.html; Adam Rowe, Germany might Block ChatGPT over Data Security Concerns, Tech.Co (Apr. 3, 2023), https://tech.co/news/germany-chatgpt-data-security.


Only Humans Are Allowed: Federal Circuit Says No to “AI Inventors”

Vivian Lin, MJLST Staffer

On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed the U.S. District Court for the Eastern District of Virginia’s decision that artificial intelligence (AI) cannot be an “inventor” on a patent application,[1] joining many other jurisdictions in confirming that only a natural person can be an “inventor.”[2] Currently, South Africa remains the only jurisdiction that has granted Dr. Stephen Thaler’s patent application naming DABUS, an AI, as the sole inventor of two patentable inventions.[3] With the release of the Federal Circuit’s opinion refusing to recognize AI as an inventor, Dr. Thaler’s fight to credit AI for inventions has reached a plateau.

DABUS, formally known as the Device for the Autonomous Bootstrapping of Unified Sentience, is an AI-based creativity machine created by Dr. Stephen Thaler, the founder of the software company Imagination Engines Inc. Dr. Thaler claims that DABUS independently invented two patentable inventions: the Fractal Container and the Neural Flame. For the past few years, Dr. Thaler has been battling patent offices around the world trying to obtain patents for these two inventions. To date, every patent office except one[4] has refused to grant the patents on the grounds that the applications do not name a natural person as the inventor.

Many jurisdictions legally require that the inventor named on a patent be a natural person. The recent Federal Circuit opinion rested mainly on statutory interpretation, reasoning that the statutory text clearly requires an inventor to be a natural person.[5] Though many jurisdictions have left the term “inventor” undefined, there seems to be general agreement that an inventor should be a natural person.[6]

Is DABUS the True Inventor?

There are many issues centered on AI inventorship. The first is whether AI can be the true inventor, and thus take credit for an invention, even though a human created the AI itself. Here it becomes necessary to ask whether there was human intervention during the discovery process, and if so, what type of intervention was involved. It might be the case that a natural person was the actual inventor of a product while the AI only assisted in carrying out that idea. For example, when a developer designs the AI with a particular question in mind and carefully selects the training data, the AI is only assisting the invention while the developer is seen as the true inventor.[7] In analyzing the DABUS case, Dr. Rita Matulionyte, a senior lecturer at Macquarie Law School in Australia and an expert in intellectual property and information technology law, has argued that DABUS is not the true inventor because Dr. Thaler’s role in the inventions was unquestionable, assuming he formulated the problem, developed the algorithm, created the training data, etc.[8]

However, it is a closer question when both AI and human effort are important to the invention. For example, AI might identify the compound for a new drug, but to complete the discovery, a scientist still has to test the compound.[9] U.S. patent law requires that the “inventor must contribute to the conception of the invention.”[10] Conception is further defined as “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[11] In the drug discovery scenario, it is difficult to determine who invented the new drug. Neither the AI developers nor the scientists fit the definition of “inventor”: the developers and trainers only built and trained the algorithm without any knowledge of the potential discovery, while the scientists only confirmed the final discovery without contributing to the development of the algorithm or the discovery of the drug.[12] In this scenario, it is likely the AI did the majority of the work and made the important discovery itself, and it should thus be the inventor of the new compound.[13]

The debate over who is the true inventor matters because mislabeling the inventor can have serious consequences. Legally, improper inventorship attribution may cause a patent application to be denied, or it may lead to the later invalidation of a granted patent. Practically speaking, human inventors are able to take credit for their inventions, and that honor comes with recognition that may incentivize future creative inventions. Thus, a misattribution may harm human inventiveness, as true inventors could be discouraged by not being recognized for their contributions.

Should AI-Generated Inventions be Patentable?

While concluding that AI is the sole inventor of an invention may be difficult, as outlined in the previous section, what happens when AI is found to be the true, sole inventor? Society’s discussion of whether AI inventions should be patented focuses mostly on policy arguments. Dr. Thaler and Ryan Abbott, a law professor and the lead of Thaler’s legal team, have argued that allowing patent protection for AI-generated inventions will encourage developers to invest time in building more creative machines that will eventually lead to more inventions in the future.[14] They have also argued that crediting AI with inventorship will protect the rights of human inventors.[15] For example, it cuts out the possibility of one person taking credit for another’s invention, which often happens when students participate in university research but are overlooked on patent applications.[16] Without the availability of patents, and the disclosure of inventions the patent system requires, it is very likely that owners of AI will keep inventions secret and privately benefit from the monopoly for however long it takes the rest of society to figure them out independently.[17]

Some critics argue against Thaler and Abbott’s view. For one, they believe that AI at its current stage is not autonomous enough to be an inventor, and human effort should be properly credited.[18] Even if AI can independently invent, they argue, its inventions should not be patentable, because once they are, there will be too many patented inventions by AI in the same field owned by the same group of people who have access to these machines.[19] That will prevent smaller companies from entering the field, having a negative effect on human inventiveness.[20] Finally, there has been a concern that not granting patents to AI-invented creations will let AI owners keep the inventions as trade secrets, leading to a potential long-term monopoly. However, that might not be a big concern, as inventions like the two created by DABUS are likely to be easily reverse engineered once they reach the market.[21]

Currently, Dr. Thaler plans to file appeals in each jurisdiction that has rejected his applications and aims to seek copyright protection as an alternative in the U.S. It is questionable whether Dr. Thaler will succeed on those appeals, but if he ever does, it will likely result in major changes to patent systems around the world. Even though most jurisdictions today forbid AI from being named as an inventor, the advancement of technology will make the need to address this issue more and more pressing as time goes on.

Notes

[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[2] Ryan Abbott, July 2022 AIP Update Around the World, The Artificial Inventor Project (July 10, 2022), https://artificialinventor.com/867-2/.

[3] Id.

[4] South Africa’s patent law does not require that an inventor be a natural person. Jordana Goodman, Homography of Inventorship: DABUS And Valuing Inventors, 20 Duke L. & Tech. Rev. 1, 17 (2022).

[5] Thaler, 43 F.4th at 1209, 1213.

[6] Goodman, supra note 4, at 10.

[7] Ryan Abbott, The Artificial Inventor Project, WIPO Magazine (Dec. 2019), https://www.wipo.int/wipo_magazine/en/2019/06/article_0002.html.

[8] Rita Matulionyte, AI as an Inventor: Has the Federal Court of Australia Erred in DABUS? 12 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219.

[9] Susan Krumplitsch et al., Can An AI System Be Named the Inventor? In Wake Of EDVA Decision, Questions Remain, DLA Piper (Sept. 13, 2021), https://www.dlapiper.com/en/us/insights/publications/2021/09/can-an-ai-system-be-named-the-inventor/#11.

[10] 2109 Inventorship, USPTO, https://www.uspto.gov/web/offices/pac/mpep/s2109.html (last visited Oct. 8, 2022).

[11] Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986).

[12] Krumplitsch et al., supra note 9.

[13] Yosuke Watanabe, I, Inventor: Patent Inventorship for Artificial Intelligence Systems, 57 Idaho L. Rev. 473.

[14] Abbott, supra note 2.

[15] Id.

[16] Goodman, supra note 4, at 21.

[17] Abbott, supra note 2.

[18] Matulionyte, supra note 8, at 10–14.

[19] Id. at 19.

[20] Id.

[21] Id. at 18.