
Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of a bleak, fully automated future.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to prompts submitted by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers; (2) being sensitive to tweaks to the input phrasing or to the same prompt being attempted multiple times; (3) being excessively verbose and overusing certain phrases; (4) being unable to ask clarifying questions when the user provides an ambiguous query; and (5) responding to harmful instructions or exhibiting biased behavior.
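To make the “autoregressive” idea concrete, here is a toy sketch of next-word prediction written for this post. It is this author’s illustration, not OpenAI’s code; the tiny bigram counter stands in for a neural network with billions of parameters. The loop is the essential point: the model repeatedly predicts a next word from the text so far, appends it, and repeats.

```python
# Toy autoregressive text generation: a bigram-count "model" predicts the
# most likely next word given the last word, then the loop repeats.
from collections import Counter, defaultdict

corpus = "the robot wrote the poem and the robot wrote the code".split()

# "Training": count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:               # no known continuation: stop early
            break
        # Greedy decoding: always pick the single most likely next word.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the robot"))  # -> "the robot wrote the robot wrote the"
```

ChatGPT does the same basic thing at an enormously larger scale: a neural network estimates the probability of each possible next token from the entire conversation so far, rather than from a single preceding word.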

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, calls it a tipping point for AI because of this difference in quality: it can write weight-loss plans and children’s books, and can even offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the 1,000-word minimum I had set for it. What is evident across these examples, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student use ChatGPT to complete a four-hour project in less than an hour, generating computer code for a startup prototype with code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, based on these results it would still, notably, graduate with a degree.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used for tasks where some failure is acceptable. In such tasks, AI like ChatGPT is already performing well enough to have taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, AI vastly expands the realm of possible productivity gains. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can find inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters meant to prevent it from producing offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that, while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matters.

Conclusion

In a more positive light, some may herald these improvements in AI as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advances in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed impossible sooner rather than later. On this view, AI like ChatGPT can be seen not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach an AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps far more unsettling is the notion that this AI is indiscriminately sucking in user data like a black hole.
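As a rough illustration of those two phases, consider the following toy sketch. It is this author’s simplification, not OpenAI’s actual training code; real systems adjust neural network weights rather than word counts. Pre-training builds a general next-word model from a large pile of unlabeled text; fine-tuning then nudges the model toward preferred behavior using a much smaller, curated dataset, here approximated by simply weighting the curated examples more heavily.

```python
# Toy two-phase training: the same next-word update rule serves both
# "pre-training" (lots of unlabeled text) and "fine-tuning" (a small,
# curated dataset given extra weight).
from collections import Counter, defaultdict

class ToyLM:
    def __init__(self):
        self.table = defaultdict(Counter)   # word -> counts of next words

    def train(self, text: str, weight: int = 1):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.table[prev][nxt] += weight

    def next_word(self, word: str):
        options = self.table.get(word)
        return options.most_common(1)[0][0] if options else None

model = ToyLM()
# Phase 1: pre-training on "unlabeled" text.
model.train("the cat sat on the mat the cat ran")
# Phase 2: fine-tuning on a curated example, weighted more heavily.
model.train("the cat answered politely", weight=10)

print(model.next_word("cat"))  # -> "answered": fine-tuning wins out
```

The privacy concern follows directly from this structure: whatever users type into the tool can become part of the fine-tuning data, and thus part of the model itself.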

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting advertisements based on user data, or Google recommending restaurants based on users’ GPS data. In comparison, the way our data is being used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Compounding the problem, the general public may not fully appreciate what privacy protections—or lack thereof—are in place in the United States. In brief, U.S. law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that have not kept up with technological advancement. Most of the ECPA addresses matters like the interception of communications, for example by wiretapping, or government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, but the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of the personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Because ChatGPT is accessible to those in the EU, and because collecting and using data is this AI’s basic function, interesting questions arise. Does the GDPR even allow for the use of ChatGPT, given that user data is constantly being used to evolve the technology?[7] Unauthorized collection and use of European residents’ data violates the GDPR, but how “use” applies to ChatGPT is not clear, and the use of data in ChatGPT’s fine-tuning process could arguably be a violation.

While a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and disclosing it could potentially violate ABA confidentiality rules. As ChatGPT stirs up even more public fervor, professionals are likely to try the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially when confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company behind ChatGPT, governs ChatGPT’s data practices. OpenAI stipulates that it collects information including contact information (name, email, etc.), user profiles, technical information (IP address, browser, device), and users’ interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with affiliates and subsidiaries of the company, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete their data upon request. However, such compliance is arguably in name only, as these regulations did not contemplate what it means for user data to form the very foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation against privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it arrives at all. If users of ChatGPT remain cognizant of what they input into the tool, and stay informed about the privacy obligations OpenAI owes its users, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay Is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Only Humans Are Allowed: Federal Circuit Says No to “AI Inventors”

Vivian Lin, MJLST Staffer

On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed the decision of the U.S. District Court for the Eastern District of Virginia that artificial intelligence (AI) cannot be an “inventor” on a patent application,[1] joining many other jurisdictions in confirming that only a natural person can be an “inventor.”[2] South Africa remains the only jurisdiction to have granted Dr. Stephen Thaler a patent naming DABUS, an AI, as the sole inventor of two patentable inventions.[3] With the release of the Federal Circuit’s opinion refusing to recognize AI as an inventor, Dr. Thaler’s fight to credit AI for inventions reaches a plateau.

DABUS, formally known as the Device for the Autonomous Bootstrapping of Unified Sentience, is an AI-based creativity machine created by Dr. Stephen Thaler, the founder of the software company Imagination Engines Inc. Dr. Thaler claims that DABUS independently invented two patentable inventions: the Fractal Container and the Neural Flame. For the past few years, Dr. Thaler has been battling patent offices around the world in an effort to obtain patents for these two inventions. To date, every patent office except one[4] has refused to grant the patents on the grounds that the applications do not name a natural person as the inventor.

Many jurisdictions make it a legal requirement that the inventor named on a patent be a natural person. The recent Federal Circuit opinion rested mainly on statutory interpretation, reasoning that the statutory text clearly requires an inventor to be a natural person.[5] And though many jurisdictions leave the term “inventor” undefined, there appears to be general agreement that an inventor must be a natural person.[6]

Is DABUS the True Inventor?

There are many issues centered on AI inventorship. The first is whether an AI can be the true inventor, and thus take credit for an invention, even though a human created the AI itself. Here it becomes necessary to ask whether there was human intervention during the discovery process, and if so, what type of intervention was involved. It might be the case that a human was the actual inventor of a product while the AI only assisted in carrying out that idea. For example, when a developer designs the AI with a particular question in mind and carefully selects the training data, the AI is only assisting the invention and the developer is seen as the true inventor.[7] Analyzing the DABUS case, Dr. Rita Matulionyte, a senior lecturer at Macquarie Law School in Australia and an expert in intellectual property and information technology law, has argued that DABUS is not the true inventor because Dr. Thaler’s role in the inventions was unquestionable, assuming he formulated the problem, developed the algorithm, created the training data, and so on.[8]

The question is closer, however, when both AI and human effort are important to the invention. For example, an AI might identify the compound for a new drug, but to complete the discovery, a scientist still has to test the compound.[9] U.S. patent law requires that the “inventor must contribute to the conception of the invention.”[10] Conception, in turn, is “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[11] In the drug-discovery scenario, it is difficult to determine who invented the new drug, as neither the AI developers nor the scientists fit the definition of “inventor”: the developers and trainers only built and trained the algorithm without any knowledge of the potential discovery, while the scientists only confirmed the final discovery without contributing to the development of the algorithm or the identification of the drug.[12] In this scenario, the AI arguably did the majority of the work and made the important discovery itself, and should thus be the inventor of the new compound.[13]

The debate over who is the true inventor matters because mislabeling the inventor can have serious consequences. Legally, improper inventorship attribution may cause a patent application to be denied, or it may lead to the later invalidation of a granted patent. Practically speaking, human inventors are able to take credit for their inventions, and that honor comes with recognition that may incentivize future creative inventions. A misattribution may therefore harm human inventiveness, as true inventors could be discouraged by not being recognized for their contributions.

Should AI-Generated Inventions be Patentable?

While concluding that an AI is the sole inventor of an invention may be difficult, as outlined in the previous section, what happens when an AI is found to be the true, sole inventor? Society’s discussion of whether AI inventions should be patented focuses mostly on policy arguments. Dr. Thaler and Ryan Abbott, a law professor and the lead of Thaler’s legal team, have argued that allowing patent protection for AI-generated inventions will encourage developers to invest time in building more creative machines that will eventually lead to more inventions.[14] They have also argued that crediting AI for inventorship will protect the rights of human inventors.[15] For example, it eliminates the possibility of one person taking credit for another’s invention, which often happens when students participate in university research but are overlooked on patent applications.[16] And without the availability of patents, and the disclosure of inventions the patent system requires, it is very likely that owners of AI will keep inventions secret and privately benefit from a de facto monopoly for however long it takes the rest of society to make the same discovery independently.[17]

Some critics argue against Thaler and Abbott’s view. For one, they believe that AI at its current stage is not autonomous enough to be an inventor, and that human effort should be properly credited.[18] Even if AI can independently invent, they argue, its inventions should not be patentable, because if they were, the same field could be flooded with patented AI inventions owned by the same small group of people with access to these machines.[19] That would prevent smaller companies from entering the field, with a negative effect on human inventiveness.[20] Finally, there is a concern that refusing patents for AI-invented creations will lead AI owners to keep the inventions as trade secrets, creating a potential long-term monopoly. That may not be a significant concern, however, as inventions like the two created by DABUS are likely to be easily reverse engineered once they reach the market.[21]

Dr. Thaler currently plans to file appeals in each jurisdiction that has rejected his applications, and he aims to seek copyright protection as an alternative in the U.S. It is questionable whether Dr. Thaler will succeed on those appeals, but if he ever does, it will likely result in major changes to patent systems around the world. Even if most jurisdictions today refuse to recognize AI as an inventor, the advancement of technology will make the need to address this issue more and more pressing as time goes on.

Notes

[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[2] Ryan Abbott, July 2022 AIP Update Around the World, The Artificial Inventor Project (July 10, 2022), https://artificialinventor.com/867-2/.

[3] Id.

[4] South Africa’s patent law does not require that an inventor be a natural person. Jordana Goodman, Homography of Inventorship: DABUS And Valuing Inventors, 20 Duke L. & Tech. Rev. 1, 17 (2022).

[5] Thaler, 43 F.4th at 1209, 1213.

[6] Goodman, supra note 4, at 10.

[7] Ryan Abbott, The Artificial Inventor Project, WIPO Magazine (Dec. 2019), https://www.wipo.int/wipo_magazine/en/2019/06/article_0002.html.

[8] Rita Matulionyte, AI as an Inventor: Has the Federal Court of Australia Erred in DABUS? 12 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219.

[9] Susan Krumplitsch et al., Can An AI System Be Named the Inventor? In Wake Of EDVA Decision, Questions Remain, DLA Piper (Sept. 13, 2021), https://www.dlapiper.com/en/us/insights/publications/2021/09/can-an-ai-system-be-named-the-inventor/#11.

[10] 2109 Inventorship, USPTO, https://www.uspto.gov/web/offices/pac/mpep/s2109.html (last visited Oct. 8, 2022).

[11] Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986).

[12] Krumplitsch et al., supra note 9.

[13] Yosuke Watanabe, I, Inventor: Patent Inventorship for Artificial Intelligence Systems, 57 Idaho L. Rev. 473 (2021).

[14] Abbott, supra note 2.

[15] Id.

[16] Goodman, supra note 4, at 21.

[17] Abbott, supra note 2.

[18] Matulionyte, supra note 8, at 10–14.

[19] Id. at 19.

[20] Id.

[21] Id. at 18.




I Think, Therefore I Am: The Battle for Intellectual Property Rights with Artificial Intelligence

Sara Pistilli, MJLST Staffer

Artificial intelligence (AI) is a computer or robot able to perform tasks that ordinarily require human judgment and intellect. Some AI systems are self-learning, allowing them to learn and progress beyond their initial programming. This creates an issue of inventorship when an AI creates patentable subject matter without any contribution from the original inventor of the AI system. This technological advancement poses the larger questions of whether AI qualifies as an “individual” under the United States Patent Act and whether people who create AI machines can claim the patent rights when the AI has created the patentable subject matter.

Artificial Intelligence “Inventors”

Patent law continuously changes as technology expands and advances. While the law has adapted to accommodate innovative technology in the past, its treatment of AI has not been fully articulated. The United States Patent and Trademark Office (USPTO) opened a public comment period on patenting AI inventions in 2019; however, it does not appear the agency sought the comments for any purpose beyond gathering information from the public. The USPTO again asked for comments in 2021, this time about patent eligibility jurisprudence as it relates to specific technological areas, including AI. It gathered this information as a “study” and did not pursue any official action. The first official push to recognize AI as an inventor came from Dr. Stephen Thaler. Thaler built an AI machine called “DABUS” and sought patent rights for the machine’s inventions. Thaler did not argue that DABUS should hold the patent rights, but rather that the machine should be named the inventor with Thaler as the patent owner. Thaler’s insistence on naming DABUS as the inventor complies with the USPTO’s rules regarding the inventor’s oath or declaration that accompanies a patent application.

United States’ Rulings

Thaler applied for patent rights over a food container and devices and methods for attracting enhanced attention, both invented by his AI machine, DABUS. The USPTO rejected his application, stating that U.S. law does not allow artificial intelligence to be listed as an inventor on a patent application or patent. Citing the Patent Act, the USPTO explained that an inventor must be a person, not a machine, and that reading “inventor” to include machines would be too broad. Thaler requested reconsideration, which the USPTO later denied. In 2021, Thaler appealed the rejection in the Eastern District of Virginia, where he again failed to obtain patent rights: Judge Brinkema ruled that only a human can be an inventor. Judge Brinkema relied heavily on the statutory interpretation of the word “individual” performed by the Supreme Court in a 2012 case on the Torture Victim Protection Act, in which the Court concluded that an “individual” refers to a “natural person.” Judge Brinkema further stated that it would be up to Congress to decide how to alter patent law to accommodate AI in the future. Thaler now has a pending appeal before the Court of Appeals for the Federal Circuit.

International Rulings

While countries’ patent systems are independent of one another, they can be influenced by technological and regulatory developments happening elsewhere. Thaler has sought patent rights for DABUS’ two inventions discussed above in several countries including, but not limited to, the United Kingdom, Australia, and South Africa.

Thaler obtained patent rights in South Africa, a first in intellectual property history. Of note, however, is that South Africa does not have a substantive patent examination system like other countries, nor do its patent laws define “inventor.”

Thaler received a more persuasive ruling in Australia that may be able to effectuate change in other countries. In 2021, Thaler’s patent application was denied in Australia, with the Australian Patent Office (APO) stating that the language of the Patents Act was inconsistent with AI being treated as an inventor. Thaler appealed this decision to the Federal Court of Australia, where Justice Beach ruled that AI can be a recognized inventor under the Australian Patents Act, although AI cannot be an applicant for, or an owner of, a patent. For these reasons, Justice Beach remitted the case to the Deputy Commissioner of the APO for reconsideration. The APO is now appealing this decision.

Like the APO, the United Kingdom Intellectual Property Office (UKIPO) pushed back against Thaler’s application for patent rights. In 2019, the UKIPO rejected Thaler’s application, stating that listing DABUS as an inventor did not meet the requirements of the United Kingdom’s Patents Act: a person must be identified as the inventor. Thaler appealed this rejection and was again denied by the UKIPO, which reasoned that recognizing a machine as an inventor would not further the innovation that patent rights are meant to encourage. Thaler appealed to the England and Wales Patents Court and was again denied patent rights, the judge ruling that Thaler had taken the Patents Act’s text out of context and that the Act cannot be construed to allow non-human inventors. In 2021, Thaler appealed this decision to the England and Wales Court of Appeal; he was once more denied patent rights, with the court agreeing that a patent is a right that can only be granted to a person and that an inventor must be a person.

Future Prospects

Thaler currently has pending applications in several countries including Brazil, Canada, China, and Japan. The outcome of the appeal of the Federal Court of Australia’s decision on whether AI can be an inventor may prove influential in efforts to amend U.S. patent law. Similarly, if more countries beyond South Africa grant Thaler his patent rights outright, the U.S. may be forced to rethink its policies on AI-invented patentable subject matter.