A New Iron Age: New Developments in Battery Technology

Poojan Thakrar, MJLST Staffer

Introduction

In the coming years, Great River Energy and Xcel Energy will both install pilot projects of a new iron-air battery technology.[1] Both utilities are working with Boston-based Form Energy. Great River Energy, Minnesota’s second-largest energy provider, plans to install a 1.5-megawatt battery next to its natural gas plant in Cambridge, MN. Xcel Energy, the state’s largest energy provider, will deploy 10-megawatt batteries in Becker, MN and Pueblo, CO. The batteries can store energy for up to 100 hours, a capability the utilities say is crucial for keeping the power on through multi-day blizzards. Form Energy says the projects may be online as early as 2025.[2]
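The arithmetic behind those headline figures is simple: a battery's energy capacity is its power rating multiplied by its discharge duration. The sketch below uses the ratings reported for the two pilots; actual deliverable energy would also depend on round-trip efficiency, which is not discussed here.

```python
# Energy capacity (MWh) = power rating (MW) x discharge duration (hours).

def energy_capacity_mwh(power_mw: float, duration_hours: float) -> float:
    """Return total stored energy in megawatt-hours."""
    return power_mw * duration_hours

great_river = energy_capacity_mwh(1.5, 100)  # Cambridge, MN pilot
xcel = energy_capacity_mwh(10, 100)          # Becker, MN / Pueblo, CO pilots

print(f"Great River Energy pilot: {great_river} MWh")  # 150.0 MWh
print(f"Xcel Energy pilot: {xcel} MWh")                # 1000.0 MWh
```

A 100-hour duration is what distinguishes these projects from typical 4-hour lithium-ion installations of the same power rating.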

The greater backdrop for these battery projects is Minnesota’s new carbon-free targets. Earlier this year, with new control of both chambers, Minnesota Democrats passed a bill mandating 100 percent carbon-free energy by 2040.[3] Large utility-scale batteries such as the ones proposed by Great River Energy and Xcel can play an important role in that transition by mitigating intermittency concerns often associated with renewables.

Technology

This technology may be uniquely suited for a future in which utilities rely more heavily on batteries. While iron-air batteries are less energy-dense than traditional lithium-ion batteries, the iron at their heart is far more abundant than lithium.[4] This allows utilities to sidestep many of the supply-chain and environmental concerns associated with lithium and the other minerals required in traditional batteries.[5] Iron-air batteries also tend to be heavier and larger than lithium-ion batteries that store equivalent energy. For batteries in phones, laptops, and cars, weight and volume are critical constraints. However, this new technology could help accelerate the uptake of large utility-scale batteries, where weight and volume are of less concern.

If your high school chemistry is rust-y, take a look at this graphic by Form Energy. When discharging electricity, the battery ‘inhales’ oxygen from the air and converts pure iron into rust. This allows electrons to flow, as seen on the right side of the graphic. As the battery is charged, the rust ‘exhales’ oxygen and converts back to iron. The battery relies on this reversible rust cycle to ultimately store its electricity. Form Energy claims that its battery can store energy at one-tenth the cost of lithium-ion batteries.[6]
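For readers who want the chemistry spelled out, the cycle described above corresponds to the standard alkaline iron-air half-reactions. The equations below are the simplified textbook version, not Form Energy's proprietary cell design:

```latex
\begin{aligned}
\text{Anode (discharge):} \quad & \mathrm{Fe} + 2\,\mathrm{OH^-} \longrightarrow \mathrm{Fe(OH)_2} + 2\,e^- \\
\text{Cathode (discharge):} \quad & \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} + 2\,e^- \longrightarrow 2\,\mathrm{OH^-} \\
\text{Overall:} \quad & 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightleftharpoons 2\,\mathrm{Fe(OH)_2}
\end{aligned}
```

Discharging runs the overall reaction left to right, oxidizing iron to rust; charging drives it right to left, "exhaling" oxygen as the rust is reduced back to metallic iron.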

Administrative Procedures

Xcel recently filed a petition with the Minnesota Public Utilities Commission (MPUC), which has jurisdiction over investor-owned utilities such as Xcel.[7] The March 6th petition seeks to recover the cost of the pilot battery project. The request was made pursuant to Minnesota Statutes section 216B.16, subdivision 7e, which allows a utility to recover costs associated with energy storage system pilot projects.

In addition, the pilot project qualifies for the standard 30 percent investment tax credit (ITC) as well as a 10 percent bonus under the federal Inflation Reduction Act (IRA) because Becker, MN is an “energy community,” an area that formerly hosted a coal mine or coal-fired power plant that has since closed. Becker is home to the Sherco coal-fired power plant, which has been an important part of that city’s economy for decades. The pilot may also receive an additional 10 percent bonus through the IRA because of the battery’s domestic materials. Any cost recovery through a rider would cover only costs beyond applicable tax credits and potential future grant awards. The MPUC has opened a comment period running until April 21st, 2023, on the question: should the Commission approve the Long Duration Energy Storage System Pilot proposed by Xcel Energy in its March 6, 2023 petition?[8]
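The way those credit percentages stack can be sketched in a few lines. This is an illustration of the arithmetic only; actual eligibility, cost basis, and stacking rules are determined under the IRA and by the IRS.

```python
# Stacking the IRA investment tax credit components described above.

BASE_ITC = 0.30          # standard investment tax credit
ENERGY_COMMUNITY = 0.10  # bonus for siting in an "energy community"
DOMESTIC_CONTENT = 0.10  # potential bonus for domestic materials

total_credit = BASE_ITC + ENERGY_COMMUNITY + DOMESTIC_CONTENT
print(f"Potential total credit: {total_credit:.0%}")  # prints "Potential total credit: 50%"
```

In other words, up to half of the project's qualifying cost could be offset by federal tax credits before any rider recovery from ratepayers.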

As a member-owned cooperative, Great River Energy does not need approval from the MPUC to recover the price of the battery project through its rates.

Conclusion

Ultimately, this is a bet on an innovative technology by two of the largest electricity providers in the state. If approved by the MPUC, ratepayers will foot the bill for this new technology. However, new technology and large investment projects are crucial for a cleaner and more resilient energy future.

Notes

[1] See Kirsti Marohn, ‘Rusty’ batteries could hold key to Minnesota’s carbon-free power future, MPR News (Feb. 10, 2023), https://www.mprnews.org/story/2023/02/10/rusty-batteries-could-hold-key-to-carbonfree-power-future. See also Ryan Kennedy, Retired coal sites to host multi-day iron-air batteries, PV Magazine (Jan. 26, 2023), https://pv-magazine-usa.com/2023/01/26/retired-coal-sites-to-host-multi-day-iron-air-batteries/.

[2] Andy Colthorpe, US utility Xcel to put Form Energy’s 100-hour iron-air battery at retiring coal power plant sites, Energy Storage News (Jan. 27, 2023), https://www.energy-storage.news/us-utility-xcel-to-put-form-energys-100-hour-iron-air-battery-at-retiring-coal-power-plant-sites/.

[3] Dana Ferguson, Walz signs carbon-free energy bill, prompting threat of lawsuit, MPR News (Feb. 7, 2023), https://www.mprnews.org/story/2023/02/07/walz-signs-carbonfree-energy-bill-prompting-threat-of-lawsuit.

[4] Form Energy Partners with Xcel Energy on Two Multi-day Energy Storage Projects, BusinessWire (Jan. 26, 2023), https://www.businesswire.com/news/home/20230126005202/en/Form-Energy-Partners-with-Xcel-Energy-on-Two-Multi-day-Energy-Storage-Projects.

[5] See Amit Katwala, The Spiralling Environmental Cost of Our Lithium Battery Addiction, Wired UK (May 8, 2018), https://www.wired.co.uk/article/lithium-batteries-environment-impact/. See also The Daily, The Global Race to Mine the Metal of the Future, New York Times (Mar. 18, 2022), https://www.nytimes.com/2022/03/18/podcasts/the-daily/cobalt-climate-change.html.

[6] Battery Technology, Form Energy, https://formenergy.com/technology/battery-technology/ (last visited Apr. 6, 2023).

[7] Petition, Long-Duration Energy Storage System Pilot Project at Sherco, at 4, Minnesota PUC (Mar. 6, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={8043C886-0000-CC18-A0DF-1A2C7EA08FA1}&documentTitle=20233-193670-01.

[8] Notice of Comment Period, Minnesota PUC (Mar. 21, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={90760487-0000-C415-89F7-FDE36D038B2C}&documentTitle=20233-194113-01.


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, emergency rooms overflowed while doctors were in short supply.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth accounted for less than 1% of health visits prior to the pandemic and as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth has tapered slightly since then, it appears to be here to stay. Nowhere has telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three outpatient mental health visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, the shift in how healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth in mental health services is privacy, for several reasons. The primary concern stems from the fact that telehealth takes place over the phone or via personal computers, and when personal devices are used, it is nearly impossible to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While some concerns remain, these secure servers have mitigated much of the risk.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth services for mental health is a little more difficult to address. This concern comes from the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] With the increased use of telehealth for mental health services, there has also been an increase in the use of these mental health apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, credit score, etc.[16]

In fact, according to a 2022 study, a popular therapy app, BetterHelp, was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

An example of information that does get shared is the intake questionnaire.[19] An intake questionnaire needs to be filled out on BetterHelp, or other therapy apps, in order for the customer to be matched with a provider.[20] The answers to these intake questionnaires were specifically found to have been shared by BetterHelp with an analytics company, along with the approximate location and device of the user.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long the therapy sessions are, how long someone spends sending messages on the app, what times someone logs into the app, what times someone sends a message or speaks to their therapists, the approximate location of the user, how often someone opens the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA has issued guidance saying that it will use enforcement discretion when dealing with mental health apps.[27] In practice, this means that if an app’s privacy risk appears low, the FDA will not pursue enforcement against the company behind it.[28]

Ultimately, if mental telehealth services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps offer telehealth services much like any healthcare provider that is covered by HIPAA. Knowing that their personal data is shared so freely, many users have understandably lost confidence in these apps. In the long run, regulatory oversight would pressure these companies to show that their services can be trusted, potentially increasing their success by rebuilding trust with the public.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth Has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, Do Mental Wellness Apps Need More Oversight?, Fierce Healthcare (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point scored for the robots, embodies people’s fear of a bleak, fully automated future.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning that it predicts each next word given the text that precedes it. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says these include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks in input phrasing or to the same prompt attempted multiple times, (3) being excessively verbose and overusing certain phrases, (4) failing to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
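The autoregressive idea, predicting the next word from the words so far, can be illustrated with a toy bigram model. This is a drastic simplification offered only for intuition; GPT-3 conditions on the entire context with a transformer network, not a lookup table of word pairs.

```python
import random
from collections import defaultdict

# Toy autoregressive "language model": predict the next word from the
# current word using bigram counts gathered from a tiny corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample a successor in proportion to how often it followed `word`."""
    options = counts.get(word)
    if not options:
        return None  # no observed successor; stop generating
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate one token at a time, feeding each prediction back in.
output = ["the"]
for _ in range(6):
    word = next_word(output[-1])
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The loop at the bottom is the essence of autoregression: each prediction is appended to the context and used to make the next one.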

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that this difference in quality makes it a tipping point for AI: it can write weight-loss plans and children’s books, and even offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I attempted to use ChatGPT to write this blog post for me, although it produced only 347 words—nowhere near the 1,000-word minimum I had set for it. What is evident in these cases, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, generating computer code for a startup prototype with code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. While ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, these results suggest it would still, notably, earn a degree.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and playing this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used in tasks where some failure is acceptable. In these tasks, AI like ChatGPT already performs well enough that it has taken over some online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities for productivity expands enormously. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block can find sudden inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

There are also drawbacks to using AI like ChatGPT for these tasks. While ChatGPT gives human-like answers, it does not necessarily give the right answer. It also cannot explain what it does or how it does it, making it difficult to verify how it arrives at its answers. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans become vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, the question may no longer be if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, today’s discussions of AI-generated art suggest that AI may achieve what was thought impossible sooner rather than later. On this view, AI like ChatGPT can be seen not as the harbinger of a human-machine society but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has received less attention is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach an AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT how its machine learning implements user data, and it described itself as a “living” AI, one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps more unsettling is the notion that this AI is indiscriminately sucking in user data like a black hole.
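That two-stage pipeline can be caricatured in a few lines of Python. This is purely illustrative and is not OpenAI's actual training code; the point is that "fine-tuning" keeps folding new user text into the same statistical model, so user data literally becomes part of the model.

```python
from collections import Counter

# Caricature of pre-training plus fine-tuning: the "model" here is just
# a table of word frequencies. Pre-training builds it from a generic
# corpus; fine-tuning keeps updating the same table with user input.
model = Counter()

def train(model, text):
    """Update the model's statistics in place with new text."""
    model.update(text.lower().split())

# Stage 1: "pre-train" on a large unlabeled corpus (one line here).
train(model, "The quick brown fox jumps over the lazy dog")

# Stage 2: "fine-tune" on user interactions as they arrive.
train(model, "tell me about the quick fox")

print(model.most_common(3))
```

After the second call, the user's words are indistinguishable from the pre-training corpus inside the model, which is the privacy concern in miniature.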

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. One might worry about Meta targeting advertisements based on user data, or Google recommending restaurants based on GPS data; the way our data is used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, shapes how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; in a sense, it is ChatGPT.

Compounding the problem, the general public may not fully appreciate what kind of privacy protections—or lack thereof—exist in the United States. In brief, U.S. law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over electronic networks is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that have not kept up with technological advancement. Most of the ECPA addresses matters like the interception of communications, for example by wiretapping, and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the internet, but the internet is far too amorphous to be regulated by this outdated act, and AI tools on the internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of the personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Because ChatGPT is accessible to those in the EU, interesting questions arise from the fact that collecting and using data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, given that user data is constantly used to evolve the technology?[7] Collecting and using European residents’ data without a lawful basis violates the GDPR, but what counts as “use” in the context of ChatGPT is not clear; the use of personal data in ChatGPT’s fine-tuning process could arguably be a violation.

While a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential client information into the chatbot in the process.[8] That information is then stored by ChatGPT, and disclosing it this way could potentially violate ABA confidentiality rules. As ChatGPT generates even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially when confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company behind ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and users’ interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, research, customer service), with its affiliates and subsidiaries, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws such as the California Consumer Privacy Act (CCPA), stating that transparency is a priority and that users can access and delete their data upon request. However, such compliance may be in name only, as these regulations never contemplated what it means for user data to form the very foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI should arguably be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it arrives at all. If users of ChatGPT remain cognizant of what they input into the tool, and stay informed about OpenAI’s obligations to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Only Humans Are Allowed: Federal Circuit Says No to “AI Inventors”

Vivian Lin, MJLST Staffer

On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed the decision of the U.S. District Court for the Eastern District of Virginia that artificial intelligence (AI) cannot be an “inventor” on a patent application,[1] joining many other jurisdictions in confirming that only a natural person can be an “inventor.”[2] Currently, South Africa remains the only jurisdiction to have granted Dr. Stephen Thaler’s patent applications naming DABUS, an AI, as the sole inventor of two patentable inventions.[3] With the release of the Federal Circuit’s opinion refusing to recognize AI as an inventor, Dr. Thaler’s fight to credit AI for inventions reaches a plateau.

DABUS, formally known as the Device for the Autonomous Bootstrapping of Unified Sentience, is an AI-based creativity machine created by Dr. Stephen Thaler, the founder of the software company Imagination Engines Inc. Dr. Thaler claims that DABUS independently invented two patentable inventions: the Fractal Container and the Neural Flame. For the past few years, Dr. Thaler has battled patent offices around the world in an effort to obtain patents for these two inventions. To date, every patent office except one[4] has refused to grant the patents on the grounds that the applications do not name a natural person as the inventor.

That a patent’s inventor must be a natural person is a legal requirement in many jurisdictions. The recent Federal Circuit opinion rested mainly on statutory interpretation, reasoning that the Patent Act’s text clearly requires an inventor to be a natural person.[5] And though many jurisdictions leave the term “inventor” undefined, there seems to be general agreement that an inventor should be a natural person.[6]

Is DABUS the True Inventor?

There are many issues centered on AI inventorship. The first is whether an AI can be the true inventor, and thereby take credit for an invention, even though a human created the AI itself. Here it becomes necessary to ask whether there was human intervention during the discovery process, and if so, what type. It may be that a human was the actual inventor of a product while the AI merely assisted in carrying out that idea. For example, when a developer designs the AI with a particular question in mind and carefully selects the training data, the AI only assists the invention, and the developer is the true inventor.[7] Analyzing the DABUS case, Dr. Rita Matulionyte, a senior lecturer at Macquarie Law School in Australia and an expert in intellectual property and information technology law, has argued that DABUS is not the true inventor because Dr. Thaler’s role in the inventions was unquestionable, assuming he formulated the problem, developed the algorithm, and created the training data.[8]

However, the question is closer when both AI and human effort are important to the invention. For example, AI might identify the compound for a new drug, but to complete the discovery, a scientist still has to test the compound.[9] U.S. patent law requires that the “inventor must contribute to the conception of the invention.”[10] Conception, in turn, is “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[11] In the drug discovery scenario, it is difficult to determine who invented the new drug. Neither the AI developers nor the scientists fit the definition of “inventor”: the developers and trainers only built and trained the algorithm without any knowledge of the potential discovery, while the scientists only confirmed the final discovery without contributing to the development of the algorithm or the identification of the compound.[12] In this scenario, the AI likely did the majority of the work and made the important discovery itself, and should thus be the inventor of the new compound.[13]

The debate over who is the true inventor matters because mislabeling the inventor can have serious consequences. Legally, improper inventorship attribution may cause a patent application to be denied, or it may lead to the later invalidation of a granted patent. Practically speaking, human inventors are able to take credit for their inventions, and that honor comes with recognition that may incentivize future creative inventions. A misattribution may therefore harm human inventiveness, as true inventors could be discouraged by not being recognized for their contributions. 

Should AI-Generated Inventions be Patentable?

While concluding that AI is the sole inventor of an invention may be difficult, as outlined in the previous section, what happens when AI is found to be the true, sole inventor? The discussion of whether AI-generated inventions should be patented focuses mostly on policy arguments. Dr. Thaler and Ryan Abbott, a law professor and the lead of Thaler’s legal team, have argued that allowing patent protection for AI-generated inventions will encourage developers to invest time in building more creative machines that will eventually lead to more inventions.[14] They have also argued that crediting AI with inventorship will protect the rights of human inventors.[15] For example, it removes the possibility of one person taking credit for another’s invention, which often happens when students participate in university research but are overlooked on patent applications.[16] Without the availability of patents, and the disclosure of inventions that the patent system requires in exchange, it is very likely that owners of AI will keep inventions secret and privately benefit from a monopoly for however long it takes the rest of society to figure the inventions out independently.[17] 

Some critics argue against Thaler and Abbott’s view. For one, they believe that AI at its current stage is not autonomous enough to be an inventor and that human effort should be properly credited.[18] Even if AI can independently invent, the argument goes, its inventions should not be patentable, because patentability would concentrate many patents in a given field in the hands of the same small group of people with access to these machines.[19] That would prevent smaller companies from entering the field, with a negative effect on human inventiveness.[20] Finally, there is a concern that refusing patents for AI-invented creations will lead AI owners to keep the inventions as trade secrets, creating a potential long-term monopoly. That may not be a serious concern, however, as inventions like the two created by DABUS are likely to be easily reverse engineered once they reach the market.[21]

Currently, Dr. Thaler plans to file appeals in each jurisdiction that has rejected his applications and aims to seek copyright protection as an alternative in the U.S. It is questionable whether Dr. Thaler will succeed on those appeals, but if he ever does, it will likely result in major changes to patent systems around the world. Even though most jurisdictions today forbid AI from being classified as an inventor, the advancement of technology will make this issue more and more pressing as time goes on. 

Notes

[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[2] Ryan Abbott, July 2022 AIP Update Around the World, The Artificial Inventor Project (July 10, 2022), https://artificialinventor.com/867-2/.

[3] Id.

[4] South Africa’s patent law does not have a requirement on inventors being a natural person. Jordana Goodman, Homography of Inventorship: DABUS And Valuing Inventors, 20 Duke L. & Tech. Rev. 1, 17 (2022).

[5] Thaler, 43 F.4th at 1209, 1213.

[6] Goodman, supra note 4, at 10.

[7] Ryan Abbott, The Artificial Inventor Project, WIPO Magazine (Dec. 2019), https://www.wipo.int/wipo_magazine/en/2019/06/article_0002.html.

[8] Rita Matulionyte, AI as an Inventor: Has the Federal Court of Australia Erred in DABUS? 12 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219.

[9] Susan Krumplitsch et al., Can An AI System Be Named the Inventor? In Wake of EDVA Decision, Questions Remain, DLA Piper (Sept. 13, 2021), https://www.dlapiper.com/en/us/insights/publications/2021/09/can-an-ai-system-be-named-the-inventor/#11.

[10] 2109 Inventorship, USPTO, https://www.uspto.gov/web/offices/pac/mpep/s2109.html (last visited Oct. 8, 2022).

[11] Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986).

[12] Krumplitsch et al., supra note 9.

[13] Yosuke Watanabe, I, Inventor: Patent Inventorship for Artificial Intelligence Systems, 57 Idaho L. Rev. 473, 290.

[14] Abbott, supra note 2.

[15] Id.

[16] Goodman, supra note 4, at 21.

[17] Abbott, supra note 2.

[18] Matulionyte, supra note 8, at 10–14.

[19] Id. at 19.

[20] Id.

[21] Id. at 18.




Extending Trademark Protections to the Metaverse

Alex O’Connor, MJLST Staffer

After a 2020 bankruptcy and steadily decreasing revenue that the company attributes to the Coronavirus pandemic, restaurant and arcade chain Chuck E. Cheese is hoping to revitalize its business model by entering the metaverse and making the transition to a pandemic-proof virtual world. In February, Chuck E. Cheese filed two intent-to-use trademark applications with the USPTO for the marks “CHUCK E. VERSE” and “CHUCK E. CHEESE METAVERSE.” 

Under Section 1 of the Lanham Act, the two most common types of applications for registration of a mark on the Principal Register are (1) a use-based application, for which the applicant must have used the mark in commerce, and (2) an “intent to use” (ITU) application, for which the applicant must possess a bona fide intent to use the mark in commerce in the near future. Chuck E. Cheese has filed ITU applications for its two marks.

The metaverse is a still-developing virtual and immersive world that will be inhabited by digital representations of people, places, and things. Its appeal lies in the possibility of living a parallel, virtual life. The pandemic has provoked a wave of investment into virtual technologies, and brands are hurrying to extend protection to virtual renditions of their marks by registering specifically for the metaverse. A series of lawsuits related to alleged infringing use of registered marks via still developing technology has spooked mark holders into taking preemptive action. In the face of this uncertainty, the USPTO could provide mark holders with a measure of predictability by extending analogue protections of marks used in commerce to substantially similar virtual renditions. 

Most notably, Hermes International S.A. sued the artist Mason Rothschild for both infringement and dilution over his use of the term “METABIRKINS” for a collection of Non-Fungible Tokens (NFTs). Hermes alleges that the NFTs are confusing customers about the source of the digital artwork and diluting the distinctive quality of Hermes’ popular line of handbags. The argument continues that “META” is merely a generic term, so “METABIRKINS” simply means “BIRKINS in the metaverse,” and Rothschild’s use of the mark constitutes trading on Hermes’ reputation as a brand.  

Many companies and individuals are rushing to the USPTO to register trademarks for their brands to use in virtual reality. Household names such as McDonald’s (“MCCAFE” for a virtual restaurant featuring actual and virtual goods), Panera Bread (“PANERAVERSE” for virtual food and beverage items), and others have recently filed applications for registration with the USPTO for virtual marks. The rush of filings signals a recognition among companies that the digital marketplace presents countless opportunities for them to expand their brand awareness, or, if they’re not careful, for trademark copycats to trade on their hard-earned goodwill among consumers.

Luckily for Chuck E. Cheese and other companies that seek to extend their brands into the metaverse, trademark protection in the metaverse is governed by the same rules as regular analogue trademark protection. That is, the mark the company seeks to protect must be distinctive, it must be used in commerce, and it must not be covered by a statutory bar to protection. For example, if a mark’s exclusive use by one firm would leave other firms at a significant non-reputation-related disadvantage, the mark is said to be functional and cannot be protected. The metaverse does not present any additional obstacles to trademark protection, so as long as Chuck E. Cheese eventually uses its two marks, it will enjoy their exclusive use among consumers in the metaverse. 

However, the relationship between new virtual marks and analogue marks is a subject of some uncertainty. Most notably, should a mark find broad success and achieve fame in the metaverse, would that virtual fame confer fame in the real world? What will trademark expansion into the metaverse mean for licensing agreements? Clarification from the USPTO could help put mark holders at ease as they venture into the virtual market. 

Additionally, trademarks in the metaverse present another venue in which trademark trolls can attempt to register an already well-known mark with no actual intent to use it, although the requirement under U.S. law that mark holders either use or possess a bona fide intent to use the mark can help mitigate this problem. Finally, observers contend that the expansion of commerce into the virtual marketplace will present opportunities for copycats to exploit marks. Already, third parties are seeking to register marks for virtual renditions of existing brands. In response, trademark lawyers are encouraging their clients to register their virtual marks as quickly as possible to head off any potential copycat users. The USPTO could ensure brands’ security by providing more robust protections to virtual trademarks based on a substantially similar, already registered analogue trademark.


“I Don’t Know What To Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and know the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I myself have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant where I was working. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As the venture capitalist Matthew Ball has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not fully formed and is still in the early stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring in the future, such as new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in Horizon Worlds, a VR platform created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world would feel as though they had actually experienced the assault in real life. This should be raising red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identities and remain anonymous, so it can be hard to figure out who committed certain prohibited acts. At the moment, some virtual reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules governing conduct in Horizon Worlds. The problem remains, however, how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Breaking the Tech Chain To Slow the Growth of Single-Family Rentals

Sarah Bauer, MJLST Staffer

For many of us looking to buy our first homes during the pandemic, the process has ranged from downright comical to disheartening. Here in Minnesota, the Twin Cities have the worst housing shortage in the nation, a problem that has both Republican and Democratic lawmakers searching for solutions to help both renters and buyers access affordable housing. People of color are particularly impacted by this shortage because the Twin Cities are also home to the largest racial homeownership gap in the nation. 

Although these issues have complex roots, tech companies and investors aren’t helping. The number of single-family rental (SFR) units — single-family homes purchased by investors and rented out for profit — has risen since the Great Recession and exploded over the course of the pandemic. In the Twin Cities, Black neighborhoods have been particularly targeted by investors for this purpose. In 2021, 8% of the homes sold in the Twin Cities metro were purchased by investors, and investors purchased homes in BIPOC-majority zip codes at nearly double the rate of white-majority zip codes. Because property ownership is a vehicle for wealth-building, removing housing stock from the available pool essentially transfers the opportunity to build wealth from individual homeowners to investors, who profit from rents as well as from the increased value of the property at sale. 

It’s not illegal for tech companies and investors to purchase and rent out single-family homes. In certain circumstances, it may actually be desirable for them to be involved in the market. If you are a seller that needs to sell your home before buying a new one, house-flipping tech companies can get you out of your home faster by purchasing the home without a showing, an inspection, or contingencies. And investors purchasing single-family homes can provide a floor to the market during slowdowns like the Great Recession, a service which benefits homeowners as well as the investors themselves. But right now we have the opposite problem: not enough homes available for first-time owner-occupants. Assuming investor-ownership is becoming increasingly undesirable, what can we do about it? To address the problem, we need to understand how technology and investors are working in tandem to increase the number of single-family rentals.

 

The Role of House-Flipping Technology and iBuyers

The increase in SFRs is fueled by investors of all kinds: corporations, local companies, and wealthy individuals. For smaller players, recent developments in tech have made it easier for them to flip their properties. For example, a recent CityLab article discussed FlipOS, “a platform that helps investors prioritize repairs, access low-interest loans, and speed the selling process.” Real estate is a decentralized industry, and such platforms make the process of buying single-family homes and renting them out faster. Investors see this as a benefit to the community because rental units come onto the market faster than they otherwise would. But this technology also gives such investors a competitive advantage over would-be owner-occupiers.

The explosion of iBuying during the pandemic also hasn’t helped. iBuyers — short for “instant buyers” — use AI-generated automated valuation models to give the seller an all-cash, no-contingency offer. This enables the seller to offload their property quickly, while the iBuyer repairs, markets, and resells the home. iBuyers are not the long-term investors that own SFRs but the house-flippers that facilitate the transfer of property between long-term owners.
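iBuyers do not publish their valuation models, but the core idea — fit a model to recent comparable sales, predict a price, and discount it to produce an instant offer — can be sketched in a few lines. All figures and the 5% discount below are invented for illustration:

```python
# Toy automated valuation model (AVM). Real iBuyer models are
# proprietary and use many more features; this sketch fits a line
# to hypothetical comparable sales and predicts from square footage.

def fit_line(sqft, prices):
    """Ordinary least squares for: price = a + b * sqft."""
    n = len(sqft)
    mean_x = sum(sqft) / n
    mean_y = sum(prices) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, prices)) \
        / sum((x - mean_x) ** 2 for x in sqft)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical comparable sales: square footage and sale price.
comps_sqft = [1100, 1400, 1650, 2000]
comps_price = [220_000, 275_000, 320_000, 390_000]

a, b = fit_line(comps_sqft, comps_price)
estimate = a + b * 1500   # value a 1,500 sq ft home from the comps
offer = estimate * 0.95   # discount the estimate to build in margin
```

The competitive edge is speed, not accuracy: a seller gets `offer` as an all-cash number in minutes, while a traditional buyer is still scheduling a showing.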

iBuyers like Redfin, Offerpad, and Opendoor (and formerly Zillow) have increasingly purchased properties in this way over the course of the pandemic. This is particularly true in Sunbelt states, which have a lot of new construction of single-family homes that are easier to price accurately. As was apparent from the demise of Zillow’s iBuying program, these companies have struggled with profitability because home values can be difficult to predict. The aspects of real estate transactions that slow down traditional homebuyers (title checks, inspections, and so on) also slow down iBuyers. So they can buy houses quickly by making all-cash offers with no inspection, but they can’t offload them any faster than another seller.

To the degree that iBuyers in the market are a problem, that problem is two-fold. First, they make it harder for first-time homeowners to purchase homes by offering cash and waiving inspections, something few first-time homebuyers can afford to offer. The second problem is a bigger one: iBuyers are buying and selling a lot of starter homes to large, non-local investors rather than back to owner-occupants or local landlords.

 

Transfer from Flippers to Corporate Investors

iBuyers as a group sell a lot of homes to corporate landlords, but it varies by company. After Zillow discontinued its iBuying program, Bloomberg reported that the company planned to offload 7,000 homes to real estate investment trusts (REITs). Offerpad sells 10-20% of its properties to institutional investors. Opendoor claims that it sells “the vast majority” of its properties to owner-occupiers. RedfinNow doesn’t sell to REITs at all. Despite the variation between companies, iBuyers on the whole sold one-fifth of their flips to institutional investors in 2021, with those sales more highly concentrated in neighborhoods of color. 

REITs allow firms to pool funds, buy bundles of properties, and convert them to SFRs. In addition to shrinking the pool of homes available for would-be owner-occupiers, REITs hire or own corporate entities to manage the properties. Management companies for REITs have increasingly come under fire for poor management, aggressively raising rent, and evictions. This is as true in the Twin Cities as elsewhere. Local and state governments do not always appear to be on the same page regarding enforcement of consumer and tenant protection laws. For example, while the Minnesota AG’s office filed a lawsuit against HavenBrook Homes, the city of Columbia Heights renewed rental occupancy licenses for the company. 

 

Discouraging iBuyers and REITs

If we agree as a policy matter that single-family homes should be owner-occupied, what are some ways to slow down the transfer of properties and give traditional owner-occupants a fighting chance? The most obvious place to start is to consider banning iBuyers and investment firms from acquiring homes. The Los Angeles City Council voted late last year to explore such a ban. Canada has voted to ban most foreigners from buying homes for two years to temper its hot real estate market, a move that will affect iBuyers and investors.

Another option is to make flipping single-family homes less attractive for iBuyers. A state lawmaker from San Diego recently proposed Assembly Bill 1771, which would impose an additional 25% tax on the gain from a sale occurring within three years of a previous sale. This is a spin on the housing-affordability plank of Bernie Sanders’s 2020 presidential campaign, which would have placed a 25% house-flipping tax on sellers of non-owner-occupied property and a 2% empty-homes tax on vacant, owned homes. But if iBuyers arguably provide a valuable service to sellers, it may not make sense to attack iBuyers across the board. Instead, it may make more sense to limit or heavily tax sales from iBuyers to investment firms, or, conversely, to reward iBuyers with a tax break for reselling homes to owner-occupants rather than to investment firms.
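The arithmetic of such a flip tax is simple to sketch. The dollar amounts below are invented, and the flat 25% rate is a simplification of the proposal described above (the actual bill’s rate schedule is more detailed):

```python
# Illustrative arithmetic for a flip tax like the one proposed in
# AB 1771: an extra 25% tax on the gain when a home is resold
# within three years. Figures are hypothetical.

def flip_tax(purchase_price, sale_price, years_held, rate=0.25):
    gain = sale_price - purchase_price
    if gain <= 0 or years_held >= 3:
        return 0.0            # no gain, or held long enough: no flip tax
    return gain * rate

# An investor flips a $300k home for $360k after one year:
tax = flip_tax(300_000, 360_000, years_held=1)  # 25% of the $60k gain
```

The point of the design is visible in the code: the tax falls only on short-hold resales with a gain, so owner-occupants and long-term holders are untouched while quick flips lose a quarter of their profit.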

It is also possible to make investment in single-family homes less attractive to REITs. In addition to banning sales to foreign investors, the Liberal Party of Canada pitched an “excessive rent surplus” tax on post-renovation rent surges imposed by landlords. In addition to taxes, heavier regulation might be in order. Management companies for REITs can be regulated more heavily by local governments if the government can show a compelling interest reasonably related to accomplishing its housing goals. Whether REIT management companies are worse landlords than mom-and-pop operations is debatable, but the scale at which REITs operate should on its own make local governments think twice about whether it is a good idea to allow so much property to transfer to investors. 

Governments, neighborhood associations, and advocacy groups can also engage in homeowner education regarding the downsides of selling to an iBuyer or investor. Many sellers are hamstrung by needing to sell quickly or to the highest bidder, but others may have more options. Sellers know who they are selling their homes to, but they have no control over to whom that buyer ultimately resells. If they know that an iBuyer is likely to resell to an investor, or that an investor is going to turn their home into a rental property, they may elect not to sell their home to the iBuyer or investor. Education could go a long way for these homeowners. 

Lastly, governments themselves could do more. If they have the resources, they could create a variation on Edina’s Housing Preservation program, under which homeowners sell their houses to the city to preserve them as affordable starter homes. In a tech-oriented spin on that program, the local government could purchase a house to make sure it ends up in the hands of another owner-occupant rather than an investor. Governments could decline to sell single-family homes seized through tax forfeiture to iBuyers or investors. They can also encourage more home-building by loosening zoning restrictions. More homes means a less competitive housing market, which REIT defenders say will make the single-family market a less attractive investment vehicle. Given the competitive advantage of such entities, it seems unlikely that first-time homebuyers could be on equal footing with investors absent such disincentives.


A Solution Enabled by the Conflict in Ukraine, Cryptocurrency Regulation, and the Energy Crisis Could Address All Three Issues

Chase Webber, MJLST Staffer

This post focuses on two political questions reinvigorated by Vladimir Putin’s invasion of Ukraine: the energy crisis and the increasing popularity and potential of blockchain technology such as cryptocurrency (“crypto”).  The two biggest debates regarding blockchain may be its extraordinarily high use of energy and the need for regulation.  The emergency of the Ukraine invasion presents a unique opportunity for political, crypto, and energy issues to synergize, with each offering solutions and positive influence for the others.

This post will first compare shortcomings in the pursuits of environmentalism and decentralization, then explain how a recent executive order marks an important turning point in developing peer-to-peer technology sufficient for effective decentralization, and finally suggest that a theoretical decentralized society may be better equipped to address the critical issues of global politics, economy, and energy use, and potentially others.

 

Relationship # 1: The Invasion and The Energy Crisis

Responding to the invasion, the U.S. and other countries have sanctioned Russia in ways that are devastating Russia’s economy, including by restricting the international sale of Russian oil.  This has dramatic implications for the interconnected global economy.  Russia is the second-largest oil exporter; cutting Russia out of the picture sends painful ripples across our global dependency on fossil fuel.

Without “beating a dead dinosaur,” the energy crisis, in a nutshell, is that (a) excessive fossil fuel consumption causes irreparable harm to the environment, and (b) our thirst for fossil fuel is unsustainable: demand exceeds both the supply and the supply’s ability to replenish, so we will eventually run out.  Both issues counsel lowering energy consumption and implementing alternative, sustainable sources of energy.

Experts suggest that innovating toward these ends is easier than deploying the solutions.  In other words, we may be capable of fixing these problems, but, as a planet, we just don’t want it badly enough yet, notwithstanding some regulatory attempts to limit consumption or incentivize sustainability.  If the irreparable harm reaches a sufficiently catastrophic level, or if the well finally runs dry, a global reorganization of energy use and consumption will be required, not merely suggested.

The energy void created by removing Russian supply from the global economy may sufficiently mimic the well running dry.  The well may not really be dry, but it would feel like it.  This could provide sufficient incentive to implement that global energy reset, viz., the planet-wide lifestyle changes for existing without fossil fuel reliance for which conservationists have been begging for decades.

The invasion moves the clock forward on the (hopefully) inevitable deployment of green innovation that would naturally occur as soon as we can’t use fossil fuels even if we still want to.

 

Relationship # 2: The Invasion and Crypto   

Crypto has, surprisingly, not been useful for avoiding economic sanctions, although it was designed to resist government regulation and control (for better or for worse).  Blockchain-based crypto transactions are “peer-to-peer,” requiring no government or private intermediaries.  Other blockchain features include a permanent record of transactions and the possibility of pseudonymity.  Once assets are in crypto form, they are safer than traditional currency: users can generally transfer them to each other, even internationally, without the possibility of seizure, theft, taxation, or regulation.
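The “permanent record” property comes from each block committing to a hash of the block before it, so rewriting any past transaction invalidates every later hash. A minimal sketch (a toy, not a real cryptocurrency implementation) shows why tampering is detectable:

```python
# Toy hash chain illustrating blockchain's tamper-evident record.
import hashlib

def block_hash(prev_hash: str, transaction: str) -> str:
    return hashlib.sha256((prev_hash + transaction).encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64            # all-zero "genesis" hash
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append({"tx": tx, "prev": prev, "hash": h})
        prev = h                          # next block commits to this one
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["tx"]) != block["hash"]:
            return False                  # history no longer matches the hashes
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
assert verify(chain)
chain[0]["tx"] = "alice->bob:500"         # try to rewrite history
assert not verify(chain)
```

Because every participant can recompute the hashes, no intermediary is needed to certify the ledger — which is precisely what makes the record resistant to the kinds of seizure and alteration described above.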

(The New York Times’ Latecomer’s Guide to Crypto and the “Learn” tab on Coinbase.com are great resources for quickly building a basic understanding of this increasingly pervasive technology.)

However, crypto is weak where the blockchain realm meets the physical realm.  While the blockchain itself is safe and secure from theft, a user’s “key” may be lost or stolen from her possession.  Peer-to-peer transactions themselves lack intermediaries, but hosts are required for users to access and use blockchain technology.  Crypto itself is not taxed or regulated, but exchanges of digital assets – e.g., buying bitcoin with US dollars – are taxed as property transactions and regulated by the Securities and Exchange Commission (SEC).  Smart contract agreements flounder where real-world verification, adjudication, or common sense is needed.

This is bad news for sanctioned Russian oligarchs because they cannot get assets “into” or “out of” crypto without consequence.  It is better news for Ukraine, where the borderless-ness and “trust” of crypto transactions ease international transmittal of relief assets and ensure legitimate receipt.

The prospect of crypto being used to circumvent U.S. sanctions brought crypto into the federal spotlight as a matter of national security.  President Biden’s Executive Order (EO) 14067 of March 9, 2022 marks an important turning point for blockchain: the moment the US government began to direct innovation and assert control.  Previously, regulatory discussion had stalemated between the fear that recognizing and controlling crypto would threaten innovation and the fear that failing to do so would weaken government influence.  The EO seems to have taken advantage of the Ukraine invasion to side-step the stagnant congressional debates.

Many had recognized crypto’s potential, but most seemed to wait out the unregulated and mystical prospect of decentralized finance until it became less risky.  Crypto is the modern equivalent of privately issued currencies, which were common during the Free Banking Era, before national banks were established during the Civil War.  They were notoriously unreliable.  Only the SEC had been giving crypto much attention until recently, when the general public noticed how profitable bitcoin became despite its volatility.

EO 14067’s policy reasoning includes crypto user protection, stability of the financial system, national security (e.g., Russia’s potential for skirting sanctions), preventing crime enablement (viz., modern equivalents to The Silk Road dark web), global competition, and, generally, federal recognition and support for blockchain innovation.  The president directed the Departments of the Treasury, Defense, Commerce, Labor, Energy, and Homeland Security, along with the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), SEC, Commodity Futures Trading Commission (CFTC), Environmental Protection Agency (EPA), and a handful of other federal agencies, to research blockchain technology.

While promoting security and a general understanding of blockchain’s potential uses and feasibility, the order also proposes a Central Bank Digital Currency (CBDC).  A CBDC is a “FedCoin” – a stablecoin issued by the government instead of by private entities.  Stablecoins (e.g., Tether) are a type of crypto whose value is pegged to the US Dollar, whereas other privately issued cryptos (e.g., Bitcoin, Ether) are more volatile because their value is backed by practically nothing.  So, unlike Tether, a privately issued stablecoin, a CBDC would be crypto issued and controlled by the Federal Reserve.

Imagine CBDCs as a dollar bill made of blockchain technology instead of paper.  A future “cash transaction” could feel more like using Venmo, but without the intermediary host, Venmo.

 

Relationship # 3: Crypto and Energy

Without getting into too many more details, blockchain technology, on which crypto is based, requires an enormous amount of energy-consuming computing power – especially under the “proof-of-work” systems that secure networks like Bitcoin.

Blockchain is a decentralized “distributed ledger technology.” The permanent recordings of transactions are stored and verifiable at every “node” – the computer in front of you could be a node – instead of in a centralized database.  In contrast, the post you are now reading is not decentralized; it is “located” in a UMN database somewhere, not on your computer’s hard drive.  Even a shared Google Doc lives in a Google database, not on each contributor’s computer.  In a distributed system, if one node changes its version of the distributed ledger, the other nodes verify the change.  If the change represents a valid transaction, it is applied to every copy of the ledger at every node; if not, the change is rejected and the ledger remains intact.

These repeated verifications give blockchain its core features, but also require a significant amount of energy.
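The verify-then-apply loop described above can be sketched in a few lines of Python.  This is a deliberately simplified model for illustration only – real blockchains add consensus mechanisms like proof-of-work, peer-to-peer networking, and cryptographic signatures – and every name below is invented for this example:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for an empty ledger

def block_hash(tx: str, prev: str) -> str:
    """Deterministic hash binding a transaction to its predecessor block."""
    return hashlib.sha256(json.dumps([tx, prev]).encode()).hexdigest()

class Node:
    """One participant holding its own complete copy of the ledger."""
    def __init__(self):
        self.ledger = []  # list of {"tx", "prev", "hash"} blocks

    def tip(self) -> str:
        """Hash of this node's latest block (or the genesis placeholder)."""
        return self.ledger[-1]["hash"] if self.ledger else GENESIS

    def verify(self, block: dict) -> bool:
        # A block is valid only if it links to this node's chain tip
        # and its hash matches its contents.
        return (block["prev"] == self.tip()
                and block["hash"] == block_hash(block["tx"], block["prev"]))

def broadcast(nodes: list, proposer: Node, tx: str) -> bool:
    """Proposer builds a block; it is applied at every node only if a
    majority of nodes independently verify it; otherwise it is rejected."""
    block = {"tx": tx, "prev": proposer.tip(),
             "hash": block_hash(tx, proposer.tip())}
    approvals = sum(n.verify(block) for n in nodes)
    if approvals > len(nodes) // 2:
        for n in nodes:
            n.ledger.append(dict(block))  # every copy updates in lockstep
        return True
    return False
```

The energy cost discussed in this section comes from running that verification (plus, in proof-of-work systems, a computationally expensive puzzle) at thousands of nodes for every transaction, rather than once in a central database.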

For most of the history of computers, computing innovation has focused primarily on function, especially increased speed.  Computer processing power eventually became sufficiently fast that, in the last twenty-ish years, computing innovation began to focus on achieving the same speed using less energy and/or with more affordability.  Automotive innovation experienced a similar shift on a different timeline.

Blockchain will likely undergo the same evolution.  First, innovators will focus on function and standardization.  Despite its popularity, the technology still falls short in both areas.  Crypto assets have sometimes disappeared into thin air due to faulty coding or have been siphoned off by anonymous users who found loopholes in the software.  Others, who became interested in crypto during November 2021, after hearing that Ether had increased in value by 989% that year and the crypto market was then worth over $3 trillion, may have been surprised when the value nearly halved by February.

Second, if it proves a profitable investment – or is incentivized by future regulations resulting from EO 14067 – innovators will focus on reducing the processing power required for maintaining a distributed ledger.

 

Decentralization, and Other Fanciful Policies

Decentralization and green tech share the same fundamental problem.  The ideas are compelling and revolutionary.  However, their underlying philosophy does not yet match our underlying policy.  In some ways, they are still too revolutionary because, in this author’s opinion, they will require either a complete change in infrastructure or significantly more creativity to be effective.  Neither of these requirements is possible without sufficient policy incentive.  Without the incentive, the ideas are innovative, but not yet truly disruptive.

Using Coinbase on an iPhone to execute a crypto transaction is to “decentralization” what driving a Tesla running on coal-sourced electricity is to “environmentalism.”  They are merely trendy and well-intentioned.  Tesla solves one problem – automotive transportation without gasoline – while creating another – a corresponding demand for electricity – because it relies on existing infrastructure.  Similarly, crypto cannot survive without centralization.  Nor should it, according to the SEC, which has been fighting to regulate privately issued crypto for years.

At first glance, EO 14067 seems to be the nail in the coffin for decentralization.  Proponents designed crypto after the 2008 housing market crash specifically hoping to avoid federal involvement in transactions.  Purists, especially during The Digital Revolution in the 90s, hoped peer-to-peer technology like blockchain (although it did not exist at that time) would eventually replace government institutions entirely – summarized in the term, “code is law.”  This has marked the tension between crypto innovators and regulators, each finding the other uncooperative with its goals.

However, some, such as Kevin Werbach, a prominent blockchain scholar, suggest that peer-to-peer technology and traditional legal institutions need not be mutually exclusive.  Each offers unique elements of “trust,” and each has its weaknesses.  Naturally, the cooperation of novel technologies and existing legal and financial structures can mean mutual benefit.  The SEC seems to share a similarly cooperative perspective, but distinguished, importantly, by the expectation that crypto will succumb to the existing financial infrastructure.  Werbach praises EO 14067, Biden’s request that the “alphabet soup” of federal agencies investigate, regulate, and implement blockchain, as the awaited opportunity for government and innovation to join forces.

The EPA is one of the agencies engaged by the EO.  Pushing for more energy-efficient methods of implementing blockchain technology will be as essential as the other stated policies of national security, global competition, and user friendliness.  If the well runs dry, as discussed above, blockchain use will stall as long as blockchain requires huge amounts of energy.  Alternatively, if energy efficiency can be attained preemptively, ongoing blockchain innovation could play a unique role in addressing climate change and other political issues, viz., decentralization.

In her book, Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing, Beth Simone Noveck suggests that future democracies could use peer-to-peer technology to gather widespread public expertise for addressing complex issues.  We have outgrown the use of “government bureaucracies that are supposed to solve critical problems on their own”; by analogy, we are only using part of our available brainpower.  More recently, Decentralization: Technology’s Impact on Organizational and Societal Structure, by local scholars Wulf Kaal and Craig Calcaterra, further suggests ways of deploying decentralization concepts.

Decentralized autonomous organizations (“DAOs”) are created using smart contracts, a blockchain-based technology, to implement democratic means of consensus and information sharing more effectively.  However, DAOs are still precarious.  Many have failed because of exploitation, hacks, fraud, sporadic participation, and, most importantly, lack of central leadership.  Remember, central leadership is exactly what DAOs and other decentralized proposals seek to avoid.  Ironically, in existing DAOs, without regulatory leadership, small, centralized groups of insiders tend to hold all the cards.
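To make the mechanism concrete: the governance rules of a DAO – who may propose, who may vote, and when a proposal executes – are encoded in software rather than in bylaws enforced by officers.  The toy Python class below illustrates the idea with a simple majority-vote rule; real DAOs encode such rules in on-chain smart contracts (commonly written in languages like Solidity), and every name here is invented for this example:

```python
class SimpleDAO:
    """Toy majority-vote organization: members propose actions and vote;
    a proposal executes automatically once a strict majority approves.
    Note there is no president or board -- the rule itself is the leadership."""

    def __init__(self, members):
        self.members = set(members)
        self.proposals = {}   # proposal id -> {"action", "votes", "executed"}
        self._next_id = 0

    def propose(self, member: str, action: str) -> int:
        assert member in self.members, "only members may propose"
        pid = self._next_id
        self._next_id += 1
        self.proposals[pid] = {"action": action, "votes": set(), "executed": False}
        return pid

    def vote(self, member: str, pid: int) -> bool:
        assert member in self.members, "only members may vote"
        p = self.proposals[pid]
        p["votes"].add(member)  # a set, so each member counts at most once
        # the "smart contract" rule: execute on a strict majority
        if not p["executed"] and len(p["votes"]) > len(self.members) // 2:
            p["executed"] = True
        return p["executed"]
```

The fragility described above lives in exactly this kind of code: if the rule has a loophole (say, a way to mint extra memberships), there is no central officer empowered to override it.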

Some claim that federal regulation of DAOs could provide transparency and disclosure standards, authentication and background checks, and other means of structural support.  The SEC blocked American CryptoFed, the first “legally sanctioned” DAO, in the state of Wyoming.  Following the recent EO, the SEC’s position may shift.

 

Mutual Opportunity

To summarize:  The invasion of Ukraine may provide the necessary incentive for actuating decentralized or environmentalist ideologies.  EO 14067 initiates a federal regulatory structure for crypto and research into blockchain implementation in the U.S.  The result could facilitate eventual decentralized and energy-conscious systems which, in turn, could facilitate resolutions to grave impending climate change troubles.  Furthermore, a new tool for gathering public consensus and expertise could shed new light on other political issues, foreign and domestic.

This sounds suspiciously like, “idea/product X will end climate change, all political disagreements, (solve world hunger?) and create global utopia,” and we all know better than to trust such assertions.

It does sound like it, but Noveck and Kaal & Calcaterra both say no, decentralization will not solve all our problems, nor does it seek to.  Instead, decentralization offers to make us, as a coordinated society, significantly more efficient problem solvers.  A decentralized organizational structure hopes to allow humans to react and adapt to situations more naturally, the way other living organisms adapt to changing environments.  We will always have problems.  Centralization, proponents argue, is no longer the best means of obtaining solutions.

In other words, one hopes that addressing critical issues in the future – like potential military conflict, economic concerns, and global warming – will not be exacerbated or limited by the very structures with which we seek to devise and implement a resolution.


I Think, Therefore I Am: The Battle for Intellectual Property Rights with Artificial Intelligence

Sara Pistilli, MJLST Staffer

Artificial intelligence (AI) is a computer or robot able to perform tasks that usually require human judgment and intellect.  Some AI is self-learning, allowing it to progress beyond its initial programming.  This creates an issue of inventorship when AI creates patentable subject matter without any contribution from the original inventor of the AI system.  This technological advancement has posed the larger question of whether AI qualifies as an “individual” under the United States Patent Act and whether people who create AI machines are able to claim the patent rights when the AI has created the patentable subject matter.

Artificial Intelligence “Inventors”

Patent law is continuously changing as technology expands and advances.  While the law has adapted to accommodate innovative technology in the past, its treatment of AI has not been fully articulated.  The United States Patent and Trademark Office (USPTO) opened a comment period on patenting AI inventions in 2019; however, it does not appear the agency sought anything beyond gathering information from the public.  In 2021, the USPTO again asked for comment on patent-eligibility jurisprudence as it relates to specific technological areas, including AI.  It gathered this information as a “study” and did not pursue any official action.  The first official push to recognize AI as an inventor came from Dr. Stephen Thaler.  Thaler built an AI machine called “DABUS” and sought patent rights for the machine’s inventions.  Thaler did not argue that DABUS should hold the patent rights, but rather that the machine be named the inventor, with Thaler as the patent owner.  Thaler’s insistence on naming DABUS as the inventor complies with the USPTO’s rules regarding an inventor’s oath or declaration accompanying a patent application.

United States’ Rulings

Thaler applied for patent rights over a food container and devices and methods for attracting enhanced attention, both invented by his AI machine, DABUS.  The USPTO rejected his application, stating that U.S. law does not allow artificial intelligence to be listed as an inventor on a patent application or patent.  The USPTO cited the Patent Act, stating that an inventor must be a person, not a machine, and that reading “inventor” to include machines would sweep too broadly.  Thaler requested reconsideration from the USPTO, which was later denied.  In 2021, Thaler appealed his rejection in the Eastern District of Virginia.  Thaler failed to obtain patent rights, with Judge Brinkema ruling that only a human can be an inventor.  Judge Brinkema relied heavily on the Supreme Court’s statutory interpretation of the word “individual” in a 2012 case on the Torture Victim Protection Act, which concluded that an “individual” refers to a “natural person.”  Judge Brinkema further stated that it will be up to Congress to decide how to alter patent law to accommodate AI in the future.  Thaler now has a pending appeal before the Court of Appeals for the Federal Circuit.

International Rulings

While countries’ patent systems are independent of one another, they can be influenced by technological and regulatory developments in other countries.  Thaler has sought patent rights for DABUS’ two inventions discussed above in several countries including, but not limited to, the United Kingdom, Australia, and South Africa.  Thaler obtained patent rights in South Africa, a first in intellectual property history.  Of note, however, is that South Africa’s patent system does not have a substantive patent examination system like other countries, nor do its patent laws define “inventor.”  Thaler received a more persuasive ruling in Australia that may be able to effectuate change in other countries.  In 2021, Thaler’s patent application was denied in Australia.  The Australian Patent Office (APO) stated that the language of the Patents Act was inconsistent with treating AI as an inventor.  Thaler appealed this decision to the Federal Court of Australia.  Justice Beach ruled that AI can be a recognized inventor under the Australian Patents Act, though AI cannot be an applicant for a patent or an owner of a patent.  For these reasons, Justice Beach ordered reconsideration and remitted the case to the Deputy Commissioner of the APO.  The APO is now appealing this decision.

Similar to the APO, the United Kingdom Intellectual Property Office (UKIPO) also pushed back against Thaler’s application for patent rights.  In 2019, the UKIPO rejected Thaler’s application, stating that listing DABUS as an inventor did not meet the requirements of the United Kingdom’s Patents Act: a person must be identified as the inventor.  Thaler appealed this rejection and was again denied by the UKIPO, which stated that naming a machine as an inventor does not serve the innovation that patent rights are meant to encourage.  Thaler appealed again, to the England and Wales Patents Court, and was again denied; the judge ruled that Thaler was taking the Patents Act’s text out of context and that the Act cannot be construed to allow non-human inventors.  In 2021, Thaler appealed this decision to the England and Wales Court of Appeal.  He was again denied patent rights, with all three judges agreeing that a patent is a right that can only be granted to a person and that an inventor must be a person.

Future Prospects

Thaler currently has pending applications in several countries including Brazil, Canada, China, and Japan.  The outcome of the appeal against the Federal Court of Australia’s decision on whether AI can be an inventor may prove crucial in helping to amend U.S. patent laws.  Similarly, if more countries, in addition to South Africa, outright grant Thaler his patent rights, the U.S. may be forced to re-think its policies on AI-invented patentable subject matter.