Artificial Intelligence

I Think, Therefore I Am: The Battle for Intellectual Property Rights With Artificial Intelligence

Sara Pistilli, MJLST Staffer

Artificial intelligence (AI) refers to a computer or robot able to perform tasks that ordinarily require human judgment and intellect. Some AI systems are self-learning, allowing them to progress beyond their initial programming. This creates an issue of inventorship when AI creates patentable subject matter without any contribution from the original inventor of the AI system. This technological advancement has posed the larger question of whether AI qualifies as an “individual” under the United States Patent Act and whether people who create AI machines can claim the patent rights when the AI has created the patentable subject matter.

Artificial Intelligence “Inventors”

Patent law is continuously changing as technology expands and advances. While the law has adapted to accommodate innovative technology in the past, its treatment of AI has not been fully articulated. The United States Patent and Trademark Office (USPTO) opened a comment period on patenting AI inventions in 2019; however, it does not appear to have had any purpose beyond gathering information from the public. In 2021, the USPTO again asked for comment on patent eligibility jurisprudence as it relates to specific technological areas, including AI. It gathered this information as a “study” and did not pursue any official action. The first official push to recognize AI as an inventor came from Dr. Stephen Thaler. Thaler built an AI machine called “DABUS” and sought patent rights for the machine’s inventions. Thaler did not argue that DABUS should hold the patent rights, but rather that the machine should be named the inventor, with Thaler as the patent owner. Thaler’s insistence on naming DABUS as the inventor complies with USPTO rules regarding the inventor’s oath or declaration that accompanies a patent application.

United States’ Rulings

Thaler applied for patent rights over a food container and devices and methods for attracting enhanced attention, both invented by his AI machine, DABUS. The USPTO rejected his application, stating that U.S. law does not allow artificial intelligence to be listed as an inventor on a patent application or patent. The USPTO cited the Patent Act, stating that an inventor must be a person, not a machine, and that reading “inventor” to include machines would be too broad. Thaler requested reconsideration from the USPTO, which was later denied. In 2021, Thaler appealed the rejection to the Eastern District of Virginia. He failed to obtain patent rights, with Judge Brinkema ruling that only a human can be an inventor. Judge Brinkema relied heavily on the Supreme Court’s statutory interpretation of the word “individual” in a 2012 case on the Torture Victim Protection Act, which concluded that an “individual” refers to a “natural person.” Judge Brinkema further stated that it would be up to Congress to decide how to alter patent law to accommodate AI in the future. Thaler now has a pending appeal before the Court of Appeals.

International Rulings

While countries’ patent systems are independent of one another, they can be influenced by technological and regulatory developments happening in another country. Thaler has sought patent rights for DABUS’ two inventions discussed above in several countries including, but not limited to, the United Kingdom, Australia, and South Africa. Thaler obtained patent rights in South Africa, a first in intellectual property history. Of note, however, is that South Africa does not have a substantive patent examination system like other countries, nor do its patent laws define “inventor.” Thaler received a more persuasive ruling in Australia that may be able to effectuate change in other countries. In 2021, Thaler’s patent application was denied in Australia. The Australian Patent Office (APO) stated that the language of the Patents Act was inconsistent with AI being treated as an inventor. Thaler appealed this decision to the Federal Court of Australia. Justice Beach ruled that AI can be a recognized inventor under the Australian Patents Act, although AI cannot be an applicant for a patent or an owner of a patent. For these reasons, Justice Beach remitted the case to the Deputy Commissioner of the APO for reconsideration. The APO is now appealing that decision.

Similar to the APO, the United Kingdom Intellectual Property Office (UKIPO) also pushed back against Thaler’s application for patent rights. In 2019, the UKIPO rejected Thaler’s application, stating that listing DABUS as an inventor did not meet the requirements of the United Kingdom’s Patents Act because a person must be identified as the inventor. Thaler appealed this rejection and was again denied by the UKIPO, which stated that a machine as an inventor does not allow for the innovation that patent rights are meant to encourage. Thaler appealed again, to the England and Wales Patents Court, and was again denied patent rights. The judge ruled that Thaler was taking the Patents Act’s text out of context and that the Act cannot be construed to allow non-human inventors. In 2021, Thaler appealed this decision to the England and Wales Court of Appeal. He was again denied patent rights, with all three judges agreeing that a patent is a right that can only be granted to a person and that an inventor must be a person.

Future Prospects

Thaler currently has pending applications in several countries, including Brazil, Canada, China, and Japan. The outcome of the appeal against the Federal Court of Australia’s decision on whether AI can be an inventor may prove crucial in prompting amendments to U.S. patent law. Similarly, if more countries beyond South Africa outright grant Thaler his patent rights, the U.S. may be forced to rethink its policies on AI-invented patentable subject matter.


With Lull in Deepfake Legislation, Questions Loom Large as Ever

Alex O’Connor, MJLST Staffer

In 2019 and 2020, remarkably realistic, politically motivated forged content went viral on social media. The content, known as “deepfakes,” included photorealistic images of political figures such as Kim Jong Un, Vladimir Putin, Matt Gaetz, and Barack Obama. Also in 2019, a woman was conned out of nearly $300,000 by a scammer posing as a U.S. Navy admiral using deepfake technology. These stories, and others, catapulted online forgeries to the front pages of newspapers, as observers were both intrigued and frightened by this novel technology.

While the potential for deepfake technology to deceive political leaders and provoke conflict helped bring deepfakes into the public consciousness, individuals, and particularly women, have been victimized by deepfakes since as early as 2017. Even today, research suggests that 96% of deepfake content available online is nonconsensual pornography. While early targets of deepfakes were mostly celebrity women, nonpublic figures have been victimized as well. Indeed, deepfake technology is becoming increasingly sophisticated and user-friendly, giving anyone so inclined the ability to forge pornography by transposing a woman’s photograph onto explicit content in order to harass, blackmail, or embarrass her. For example, one deepfake app allowed users to strip a subject’s clothing from photos, creating a photorealistic nude image. After widespread outcry, the developers shut the app down only hours after its launch.

The political implications of deepfakes alarmed lawmakers as well, and Congress leapt into action. Beginning in 2020, the National Defense Authorization Act (NDAA) required the Department of Homeland Security (DHS) to issue an annual report on the threats that deepfake technology poses to national security. The following year, the NDAA broadened the DHS report to include threats to individuals as well. Another piece of legislation, the Identifying Outputs of Generative Adversarial Networks Act, directed the National Institute of Standards and Technology to support research for developing standards related to deepfake content.

A much more controversial bill went beyond research and reporting requirements. The DEEP FAKES Accountability Act would require any producer of deepfake content to include a watermark over the image notifying viewers that it is a forgery. If the content contains “sexual content of a visual nature,” producers of unwatermarked content would be subject to criminal penalties, while anyone who merely violates the watermark requirement would be subject to civil penalties of $150,000 per image.

While many have celebrated the bill for its potential to protect individuals and the political process, others have criticized it as an overbroad and ineffective infringement on free speech. Producers of political satire in particular may find the watermark requirement a joke killer. Further, some worry that the pace of deepfake technology development could expose websites to interminable litigation, as the proliferation of deepfake content makes enforcement of the act on platforms impossible. Originally introduced in June 2019 by Representative Yvette Clarke (D-NY-9), the bill languished in committee. Representative Clarke reintroduced the bill in April of this year before the 117th Congress, and it is currently being considered by three committees: Energy and Commerce, Judiciary, and Homeland Security.

The flurry of legislative activity at the federal level has been mirrored by the states. Five states have enacted deepfake legislation to combat political interference, nonconsensual pornography, or both, while another four have introduced similar legislation. As with the federal legislation, opposition to the state deepfake laws is grounded in First Amendment concerns, with civil liberties advocates such as the ACLU sending a letter to the California governor asking him to veto the legislation. He declined.

Deepfake-related legislative activity has stalled during the coronavirus pandemic, but the questions of how to craft legislation that strikes the right balance between privacy and dignity on the one hand, and free expression and satire on the other, loom as large as ever. These questions will only become more pressing with the rapid growth of deepfake technology and growing concerns about governmental overreach in good-faith efforts to protect citizens’ privacy and the democratic process.