
In the Absence of Regulation, Generative AI May Be Reined in Through the Courts

Ted Mathiowetz, MJLST Staffer

In the space of a year, artificial intelligence (AI) seems to have grabbed hold of the contemporary conversation around technology and calls for increased regulation. With ChatGPT’s release in late November 2022, along with the release of various art-generation tools earlier that year, the conversation surrounding tech regulation quickly centered on AI. In the wake of growing Congressional focus on AI, the White House proposed a blueprint for a preliminary AI Bill of Rights as fears over unregulated advances in the technology have grown.[1] Debate has raged over the potential efficacy of this Bill of Rights and whether it could be enacted in time to rein in AI development.[2] But while Washington weighs whether the current regulatory framework can effectively set ground rules, the matter of AI has already begun to be litigated.[3]

Fear over the power of AI has been mounting in numerous sectors as ChatGPT has proven its ability to pass exams such as the Multistate Bar Exam,[4] the U.S. Medical Licensing Exam, and more.[5] Fears over AI’s capabilities and potential advancement are not confined to academia, either. The legal industry is already circling the wagons to prevent AI lawyers from representing would-be clients in court.[6] Edelson, a law firm based in Chicago, filed a class action complaint in California state court alleging that DoNotPay, an AI service that markets itself as “the world’s first robot lawyer,” unlawfully provides a range of legal services.[7] The complaint alleges that DoNotPay engages in unlawful business practice by “holding itself out to be an attorney”[8] and “engaging in the unlawful practice of law by selling legal services… when it was not licensed to practice law.”[9]

Additional litigation has been filed against the makers of AI art generators, alleging copyright violations.[10] The plaintiffs argue that a swath of AI firms violated the Digital Millennium Copyright Act in constructing their models by using software that copied millions of images, without compensating the images’ creators, as references for building out user-requested images.[11] Notably, both of these suits are class actions[12] and may serve as a strong blueprint for how wary parties can rein in AI through the court system.

Faridian v. DONOTPAY, Inc. — The Licensing Case

AI is here to stay in the legal industry, for better or worse.[13] But where some have sounded the alarm for years that AI will replace lawyers altogether,[14] the truth is likely quite different, with AI becoming a tool that helps lawyers work more efficiently.[15] There are nonetheless existential threats to the industry, as seen in the Faridian case, where DoNotPay allows people to write wills, contracts, and more without the help of a trained legal professional. This has led to shoddy AI-generated work, raising concern that AI legal technology will lead to more troublesome legal action down the line for its users.[16]

The AI-lawyer revolution, however, may not be around much longer. In addition to the Faridian case, which targets the robot lawyer’s mainly transactional work, DoNotPay has also run into problems trying to litigate. The company tried to get its AI attorney into court to dispute traffic tickets and was later “forced” to withdraw the technology’s help after “multiple state bar associations [threatened]” to sue and it was cautioned that the move could carry potential prison time for its CEO, Joshua Browder.[17]

Given that most states require bar applicants to 1) complete a Juris Doctor program at an accredited institution, 2) pass the bar exam, and 3) pass a moral character evaluation in order to practice law, it is rather likely that robot lawyers will not see a courtroom for some time, if ever. Instead, there may be a pro se revolution of sorts, wherein litigants aid themselves with AI legal services outside of the courtroom.[18] But, for the most part, the legal field will likely incorporate AI into its toolkit rather than be replaced by it. Nevertheless, the Faridian case, depending on its outcome, will likely provide a clear path to litigation for occupations with extensive licensing requirements that are endangered by AI advancement.

Sarah Andersen et al. v. Stability AI Ltd. — The Copyright Case

For occupations without the legal field’s barriers to entry, there is another way forward in the courts to stem the tide of AI in the absence of regulation. In the Andersen case, a class of artists has brought suit against various AI art-generation companies for infringing their copyrighted artwork by using it to create the reference framework for generated images.[19] The function of the generative AI is relatively straightforward. For example, if I were to log on to an AI art generator and type “Generate Lionel Messi in the style of Vincent Van Gogh,” it would produce an image of Lionel Messi in the style of Van Gogh’s “Self-Portrait with a Bandaged Ear.” There is no copyright on Van Gogh’s artwork, but the AI accesses all kinds of copyrighted artwork in the style of Van Gogh, as well as copyrighted images of Lionel Messi, as reference points to create the generated image. The AI image services have thus created a multitude of legal issues for their parent companies, including claims of direct copyright infringement for storing copies of the works in building out the system, vicarious copyright infringement when consumers generate artwork in the style of a given artist, and DMCA violations for not properly attributing existing work, among other claims.[20]

This case is being closely watched and hotly debated, as a ruling against AI could lead to claims against other generative AI systems, such as ChatGPT, for not properly attributing or paying for material used in building out the model.[21] Defendants have claimed that the use of copyrighted material constitutes fair use, but these claims have not yet been fully litigated, so we will have to wait for a decision on that front.[22] It is clear that as fast as generative AI seemed to take hold of the world, litigation has ramped up, calling its future into question. Other governments are also growing increasingly wary of the technology, with Italy already banning ChatGPT and Germany heavily considering it, citing “data security concerns.”[23] It remains to be seen how the United States will deal with this new technology in terms of regulation or an outright ban, but it is clear that the current battleground is in the courts.

Notes

[1] See Blueprint for an AI Bill of Rights, The White House (Oct. 5, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/; Pranshu Verma, The AI ‘Gold Rush’ is Here. What will it Bring? Wash. Post (Jan. 20, 2023), https://www.washingtonpost.com/technology/2023/01/07/ai-2023-predictions/.

[2] See Luke Hughest, Is an AI Bill of Rights Enough?, TechRadar (Dec. 10, 2022), https://www.techradar.com/features/is-an-ai-bill-of-rights-enough; see also Ashley Gold, AI Rockets ahead in Vacuum of U.S. Regulation, Axios (Jan. 30, 2023), https://www.axios.com/2023/01/30/ai-chatgpt-regulation-laws.

[3] Gold, supra note 2.

[4] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score nearing 90th Percentile, ABA J. (Mar. 16, 2023), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.

[5] See, e.g., Lakshmi Varanasi, OpenAI just announced GPT-4, an Updated Chatbot that can pass everything from a Bar Exam to AP Biology. Here’s a list of Difficult Exams both AI Versions have passed., Bus. Insider (Mar. 21, 2023), https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

[6] Stephanie Stacey, ‘Robot Lawyer’ DoNotPay is being Sued by a Law Firm because it ‘does not have a Law Degree’, Bus. Insider (Mar. 12, 2023), https://www.businessinsider.com/robot-lawyer-ai-donotpay-sued-practicing-law-without-a-license-2023-3.

[7] Sara Merken, Lawsuit Pits Class Action Firm against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/.

[8] Complaint at 2, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[9] Id. at 10.

[10] Riddhi Setty, First AI Art Generator Lawsuits Threaten Future of Emerging Tech, Bloomberg L. (Jan. 20, 2023), https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech.

[11] Complaint at 1, 13, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[12] Id. at 12; Complaint at 1, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[13] See, e.g., Chris Stokel-Walker, Generative AI is Coming for the Lawyers, Wired (Feb. 21, 2023), https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/.

[14] Dan Mangan, Lawyers could be the Next Profession to be Replaced by Computers, CNBC (Feb.17, 2017), https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.

[15] Stokel-Walker, supra note 13.

[16] Complaint at 7, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[17] Debra Cassens Weiss, Traffic Court Defendants lose their ‘Robot Lawyer’, ABA J. (Jan. 26, 2023), https://www.abajournal.com/news/article/traffic-court-defendants-lose-their-robot-lawyer#:~:text=Joshua%20Browder%2C%20a%202017%20ABA,motorists%20contest%20their%20traffic%20tickets..

[18] See Justin Snyder, RoboCourt: How Artificial Intelligence can help Pro Se Litigants and Create a “Fairer” Judiciary, 10 Ind. J.L. & Soc. Equality 200 (2022).

[19] See Complaint, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[20] Id. at 10–12.

[21] See, e.g., Dr. Lance B. Eliot, Legal Doomsday for Generative AI ChatGPT if Caught Plagiarizing or Infringing, warns AI Ethics and AI Law, Forbes (Feb. 26, 2023), https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=790aecab122b.

[22] Ron. N. Dreben, Generative Artificial Intelligence and Copyright Current Issues, Morgan Lewis (Mar. 23, 2023), https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues.

[23] Nick Vivarelli, Italy’s Ban on ChatGPT Sparks Controversy as Local Industry Spars with Silicon Valley on other Matters, Yahoo! (Apr. 3, 2023), https://www.yahoo.com/entertainment/italy-ban-chatgpt-sparks-controversy-111415503.html; Adam Rowe, Germany might Block ChatGPT over Data Security Concerns, Tech.Co (Apr. 3, 2023), https://tech.co/news/germany-chatgpt-data-security.


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and with a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without limitations, however. OpenAI says these include: (1) writing plausible-sounding but incorrect or nonsensical answers; (2) being sensitive to tweaks in input phrasing or to repeated attempts at the same prompt; (3) being excessively verbose and overusing certain phrases; (4) failing to ask clarifying questions when the user provides an ambiguous query; and (5) responding to harmful instructions or exhibiting biased behavior.
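For readers curious what “autoregressive” means in practice, the loop can be sketched in a few lines of Python. This is a toy illustration only: a hypothetical bigram word-counter stands in for GPT’s neural network, but the generation procedure (predict the next word, append it, repeat) is the same basic idea.

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts learned from a tiny corpus.
# Real models like GPT are neural networks trained on billions of
# tokens; this only illustrates the autoregressive loop itself.
corpus = "the robot lawyer filed the brief and the robot lawyer won".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often `nxt` follows `prev`

def generate(prompt_word, length=4):
    """Autoregressively extend a prompt: each step predicts the most
    likely next word given the word just produced, then repeats."""
    out = [prompt_word]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # extends "the" one predicted word at a time
```

The point of the sketch is only that output is built one prediction at a time, each conditioned on everything generated so far, which is why models like ChatGPT can produce fluent text without any explicit notion of truth.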

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, calls it a tipping point for AI because of this difference in quality: it can be used to write weight-loss plans, children’s books, and even advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I attempted to use ChatGPT to write this blog post for me, although it produced only 347 words—nowhere near the 1,000-word minimum I had set for it. What is evident in these cases, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick reports that a student using ChatGPT completed a four-hour project in less than an hour, creating computer code for a startup prototype with code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. While ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, it would still, notably, graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and serving as this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used for tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, AI expands the realm of productivity possibilities enormously. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by conversing with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matters.

Conclusion

In a more positive light, some may herald the improvements in AI such as ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was thought impossible sooner rather than later. Seen this way, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human functions in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has received somewhat less buzz is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach an AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT how its machine learning uses user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps more unsettling is the notion that this AI is indiscriminately sucking in user data like a black hole.
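To make the pre-training versus fine-tuning distinction concrete, here is a deliberately simplified sketch in Python. The “model” is just a bag of word counts (a hypothetical stand-in, nothing like OpenAI’s actual pipeline), but it shows the sense in which each user interaction is folded directly into the model’s state:

```python
from collections import Counter

class ToyModel:
    """A stand-in for a language model: its 'parameters' are word
    frequencies. Every user interaction shifts those parameters,
    which is the sense in which user data becomes part of the model."""

    def __init__(self, pretraining_text):
        # "Pre-training": initialize parameters from a fixed corpus.
        self.counts = Counter(pretraining_text.lower().split())

    def fine_tune(self, user_message):
        # "Fine-tuning": user input is folded into the model's state.
        self.counts.update(user_message.lower().split())

    def top_word(self):
        # A crude proxy for the model's "behavior."
        return self.counts.most_common(1)[0][0]

model = ToyModel("privacy law privacy policy")
before = model.top_word()          # pre-training data dominates
model.fine_tune("gdpr gdpr gdpr")  # a few user messages arrive...
after = model.top_word()           # ...and the model's behavior shifts
print(before, after)
```

Once a user’s words have been absorbed this way, they are no longer a discrete record that can simply be looked up and deleted; they have become part of the model’s parameters, which is precisely what makes the privacy questions below so hard.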

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. One might worry about Meta targeting advertisements based on user data, or Google recommending restaurants based on GPS data; by comparison, the way our data is used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, is central to how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Part of the problem is that the general public may not fully appreciate what privacy protections—or lack thereof—are in place in the United States. In brief, U.S. law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of ECPA predates the modern internet, and its amendments have been meager changes that have not kept pace with technological advancement. Most of ECPA addresses interceptions of communications, such as wiretapping, or government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise from the fact that the collection and use of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Unauthorized collection and use of EU residents’ data can violate the GDPR, but what counts as “use” in the context of ChatGPT is not clear; the use of data in ChatGPT’s fine-tuning process could arguably be a violation.

While a unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and the disclosure could potentially violate ABA confidentiality rules. As ChatGPT generates ever more public fervor, professionals are likely to try the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially where confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company behind ChatGPT, governs ChatGPT’s data practices. OpenAI stipulates that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with affiliates and subsidiaries of the company, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws such as the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it comes at all. If users of ChatGPT remain cognizant of what they input into the tool and stay informed about OpenAI’s obligations to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Info Security Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.