Artificial Intelligence

Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe could be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or to the same prompt being attempted multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
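To make “autoregressive” concrete, here is a deliberately tiny sketch in Python (an illustration only, nothing like OpenAI’s actual model): a next-word predictor built from simple bigram counts over a toy corpus, generating text one word at a time with each prediction conditioned on the text so far.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on hundreds of billions of words.
corpus = "the cat sat on the mat . the cat ran".split()

# Count which word follows which (a bigram table standing in for GPT's network).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# Autoregressive generation: predict a word, append it, repeat.
text = ["the"]
for _ in range(3):
    text.append(predict_next(text[-1]))

print(" ".join(text))  # "the cat sat on"
```

GPT-3 performs the same predict-append-repeat loop, but with a neural network over a vast vocabulary and training corpus rather than a frequency table.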

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that this difference in quality makes it a tipping point for AI: it can write weight-loss plans and children’s books, and even offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the 1,000-word minimum that I had set for it. What is evident from these examples, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, generating computer code for a startup prototype with code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, ChatGPT would not graduate at the top of its class (based on these results it would actually be placed on academic probation), but it would, notably, still graduate with a degree.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable—such as autonomous driving—AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, AI expands the realm of possibilities for productivity. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answers. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matters.

Conclusion

In a more positive light, some may herald the improvements in AI like ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans become vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to mimic human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed impossible sooner rather than later. Seen this way, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach an AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps more unsettling still is the notion that this AI is indiscriminately sucking in user data like a black hole.

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might be able to glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on their GPS data. In comparison, the way that our data is being used by ChatGPT is in a league of its own. User data is being iterated upon, and most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Compounding the problem, the general public may not have full awareness of what kind of privacy protections—or lack thereof—are in place in the United States. In brief, we tend to favor free expression over the protection of individual privacy. The federal statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of the ECPA addresses the interception of communications, such as wiretapping, and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise because the collection and use of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Collecting and using European citizens’ data without a lawful basis violates the GDPR, but how “use” applies to ChatGPT is not clear; the use of personal data in ChatGPT’s fine-tuning process could arguably be a violation.

Though a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and disclosing it could potentially violate ABA confidentiality rules. As ChatGPT brews even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they are inputting into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects information including contact information (name, email, etc.), user profiles, technical information (IP address, browser, device), and users’ interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with its affiliates and subsidiaries, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not even contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it comes at all. If users of ChatGPT can be cognizant of what they are inputting into the tool, and stay informed about what kind of obligations OpenAI has to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI Chatbot ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Only Humans Are Allowed: Federal Circuit Says No to “AI Inventors”

Vivian Lin, MJLST Staffer

On August 5, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed the U.S. District Court for the Eastern District of Virginia’s decision that artificial intelligence (AI) cannot be an “inventor” on a patent application,[1] joining many other jurisdictions in confirming that only a natural person can be an “inventor.”[2] Currently, South Africa remains the only jurisdiction that has granted Dr. Stephan Thaler’s patent naming DABUS, an AI, as the sole inventor of two patentable inventions.[3] With the release of the Federal Circuit’s opinion refusing to recognize AI as an inventor, Dr. Thaler’s fight to credit AI for inventions has reached a standstill.

DABUS, formally known as the Device for the Autonomous Bootstrapping of Unified Sentience, is an AI-based creativity machine created by Dr. Stephan Thaler, the founder of the software company Imagination Engines Inc. Dr. Thaler claims that DABUS independently invented two patentable inventions: the Factual Container and the Neural Flame. For the past few years, Dr. Thaler has battled patent offices around the world trying to obtain patents for these two inventions. To date, every patent office except one[4] has refused to grant the patents on the grounds that the applications do not name a natural person as the inventor.

Many jurisdictions make it a legal requirement that the inventor on a patent be a natural person. The recent Federal Circuit opinion rested mainly on statutory interpretation, reasoning that the statutory text clearly requires an inventor to be a natural person.[5] Though many jurisdictions have left the term “inventor” undefined, there seems to be general agreement that an inventor should be a natural person.[6]

Is DABUS the True Inventor?

There are many issues centered on AI inventorship. The first is whether AI can be the true inventor, and subsequently take credit for an invention, even though a human created the AI itself. Here it becomes necessary to inquire into whether there was human intervention during the discovery process, and if so, what type of intervention was involved. It might be the case that a natural person was the actual inventor of a product while AI only assisted in carrying out that idea. For example, when a developer designs the AI with a particular question in mind and carefully selects the training data, the AI is only assisting the invention, and the developer is seen as the true inventor.[7] In analyzing the DABUS case, Dr. Rita Matulionyte, a senior lecturer at Macquarie Law School in Australia and an expert in intellectual property and information technology law, has argued that DABUS is not the true inventor because Dr. Thaler’s role in the inventions was undeniable, assuming he formulated the problem, developed the algorithm, created the training data, etc.[8]

However, it is a closer question when both AI and human effort are important to the invention. For example, AI might identify the compound for a new drug, but to conclude the discovery, a scientist still has to test the compound.[9] U.S. patent law requires that the “inventor must contribute to the conception of the invention.”[10] Conception, in turn, is “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[11] In the drug discovery scenario, it is difficult to determine who invented the new drug. Neither the AI developers nor the scientists fit the definition of “inventor”: the developers and trainers only built and trained the algorithm without any knowledge of the potential discovery, while the scientists only confirmed the final discovery without contributing to the development of the algorithm or the discovery of the drug.[12] In this scenario, it is likely the AI did the majority of the work and made the important discovery itself, and should thus be the inventor of the new compound.[13]

The debate over who is the true inventor is important because mislabeling the inventor can have serious consequences. Legally, improper inventorship attribution may cause a patent application to be denied, or may lead to the later invalidation of a granted patent. Practically speaking, human inventors are able to take credit for their inventions, and that honor comes with recognition that may incentivize future creative inventions. Thus, a misattribution may harm human inventiveness, as true inventors could be discouraged by not being recognized for their contributions.

Should AI-Generated Inventions be Patentable?

While concluding that AI is the sole inventor of an invention may be difficult, as outlined in the previous section, what happens when AI is found to be the true, sole inventor? Society’s discussion of whether AI inventions should be patented focuses mostly on policy arguments. Dr. Thaler and Ryan Abbott, a law professor and the lead of Thaler’s legal team, have argued that allowing patent protection for AI-generated inventions will encourage developers to invest time in building more creative machines that will eventually lead to more inventions in the future.[14] They also argue that crediting AI for inventorship will protect the rights of human inventors.[15] For example, it cuts out the possibility of one person taking credit for another’s invention, which often happens when students participate in university research but are overlooked on patent applications.[16] And without patent availability, and the patent system’s required disclosure of inventions, it is very likely that owners of AI will keep inventions secret and privately benefit from the monopoly for however long it takes the rest of society to figure them out independently.[17]

Some critics argue against Thaler and Abbott’s view. For one, they believe that AI at its current stage is not autonomous enough to be an inventor and that human effort should be properly credited.[18] Even if AI can independently invent, its inventions should not be patentable because, once they are, too many patented AI inventions in the same field would be owned by the same group of people who have access to these machines.[19] That would prevent smaller companies from entering the field, with a negative effect on human inventiveness.[20] Finally, there has been a concern that not granting patents to AI-invented creations will let AI owners keep the inventions as trade secrets, leading to a potential long-term monopoly. That may not be a large concern, however, as inventions like the two created by DABUS are likely to be easily reverse engineered once they reach the market.[21]

Currently, Dr. Thaler plans to file appeals in each jurisdiction that has rejected his applications and aims to seek copyright protection as an alternative in the U.S. It is questionable whether Dr. Thaler will succeed on those appeals, but if he ever does, it will likely result in major changes to patent systems around the world. Even if most jurisdictions today forbid AI from being classified as an inventor, with the advancement of technology the need to address this issue will only become more pressing as time goes on.

Notes

[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[2] Ryan Abbott, July 2022 AIP Update Around the World, The Artificial Inventor Project (July 10, 2022), https://artificialinventor.com/867-2/.

[3] Id.

[4] South Africa’s patent law does not have a requirement on inventors being a natural person. Jordana Goodman, Homography of Inventorship: DABUS And Valuing Inventors, 20 Duke L. & Tech. Rev. 1, 17 (2022).

[5] Thaler, 43 F.4th at 1209, 1213.

[6] Goodman, supra note 4, at 10.

[7] Ryan Abbott, The Artificial Inventor Project, WIPO Magazine (Dec. 2019), https://www.wipo.int/wipo_magazine/en/2019/06/article_0002.html.

[8] Rita Matulionyte, AI as an Inventor: Has the Federal Court of Australia Erred in DABUS? 12 (2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974219.

[9] Susan Krumplitsch et al., Can an AI System Be Named the Inventor? In Wake of EDVA Decision, Questions Remain, DLA Piper (Sept. 13, 2021), https://www.dlapiper.com/en/us/insights/publications/2021/09/can-an-ai-system-be-named-the-inventor/#11.

[10] 2109 Inventorship, USPTO, https://www.uspto.gov/web/offices/pac/mpep/s2109.html (last visited Oct. 8, 2022).

[11] Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986).

[12] Krumplitsch et al., supra note 9.

[13] Yosuke Watanabe, I, Inventor: Patent Inventorship for Artificial Intelligence Systems, 57 Idaho L. Rev. 473, 290.

[14] Abbott, supra note 2.

[15] Id.

[16] Goodman, supra note 4, at 21.

[17] Abbott, supra note 2.

[18] Matulionyte, supra note 8, at 10–14.

[19] Id. at 19.

[20] Id.

[21] Id. at 18.




“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, they can report it to the police and hold the criminal accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the Internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the Internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are in use today, often in the form of video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
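The “ledger” idea behind blockchain can be sketched in a few lines of Python (purely illustrative, not any production system, and the asset names are hypothetical): each block stores a hash of the block before it, so tampering with any earlier record invalidates the rest of the chain.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's data together with the previous block's hash."""
    payload = json.dumps(block["data"], sort_keys=True) + block["prev_hash"]
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(prev_hash, data):
    block = {"prev_hash": prev_hash, "data": data}
    block["hash"] = block_hash(block)
    return block

# A two-entry ledger recording the provenance of a hypothetical digital asset.
chain = [make_block("genesis", {"owner": "alice", "asset": "parcel-1"})]
chain.append(make_block(chain[-1]["hash"], {"owner": "bob", "asset": "parcel-1"}))

def chain_is_valid(chain):
    """Each block's stored hash must match its contents and link to the previous block."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(chain_is_valid(chain))  # True until any record is altered
```

Because no single record can be rewritten without breaking every later link, such a ledger can track who owns a digital asset without a central record-keeper.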

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not yet exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not fully fleshed out and is still in development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not yet fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world can feel as if they actually experienced the assault in real life. This should raise red flags. The problem arises, however, when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse, given users’ ability to mask their identity and remain anonymous, so it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even where terms of service exist, however, the problem remains how to enforce them. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Breaking the Tech Chain to Slow the Growth of Single-Family Rentals

Sarah Bauer, MJLST Staffer

For many of us looking to buy our first homes during the pandemic, the process has ranged from downright comical to disheartening. Here in Minnesota, the Twin Cities have the worst housing shortage in the nation, a problem that has both Republican and Democratic lawmakers searching for solutions to help renters and buyers access affordable housing. People of color are particularly impacted by this shortage because the Twin Cities are also home to the largest racial homeownership gap in the nation.

Although these issues have complex roots, tech companies and investors aren’t helping. The number of single-family rental (SFR) units — single-family homes purchased by investors and rented out for profit — has risen since the Great Recession and exploded over the course of the pandemic. In the Twin Cities, Black neighborhoods have been particularly targeted by investors for this purpose. In 2021, 8% of the homes sold in the Twin Cities metro were purchased by investors, and investors purchased homes in BIPOC-majority zip codes at nearly double the rate of white-majority neighborhoods. Because property ownership is a vehicle for wealth-building, removing housing stock from the available pool essentially transfers the opportunity to build wealth from individual homeowners to investors, who can profit from rents as well as the increased value of the property at sale.

It's not illegal for tech companies and investors to purchase and rent out single-family homes. In certain circumstances, it may actually be desirable for them to be involved in the market. If you are a seller who needs to sell your home before buying a new one, house-flipping tech companies can get you out of your home faster by purchasing it without a showing, an inspection, or contingencies. And investors purchasing single-family homes can provide a floor to the market during slowdowns like the Great Recession, a service that benefits homeowners as well as the investors themselves. But right now we have the opposite problem: not enough homes available for first-time owner-occupants. If investor ownership is becoming increasingly undesirable, what can we do about it? To address the problem, we need to understand how technology and investors are working in tandem to increase the number of single-family rentals.

 

The Role of House-Flipping Technology and iBuyers

The increase in SFRs is fueled by investors of all kinds: corporations, local companies, and wealthy individuals. For smaller players, recent developments in tech have made it easier for them to flip their properties. For example, a recent CityLab article discussed FlipOS, “a platform that helps investors prioritize repairs, access low-interest loans, and speed the selling process.” Real estate is a decentralized industry, and such platforms make the process of buying single-family homes and renting them out faster. Investors see this as a benefit to the community because rental units come onto the market faster than they otherwise would. But this technology also gives such investors a competitive advantage over would-be owner-occupiers.

The explosion of iBuying during the pandemic also hasn’t helped. iBuyers — short for “instant buyers” — use AI to generate automated valuation models to give the seller an all-cash, no contingency offer. This enables the seller to offload their property quickly, while the iBuyer repairs, markets, and re-sells the home. iBuyers are not the long-term investors that own SFRs, but the house-flippers that facilitate the transfer of property between long-term owners.

iBuyers like Redfin, Offerpad, Opendoor (and formerly Zillow) have increasingly purchased properties in this way over the course of the pandemic. This is particularly true in Sunbelt states, which have a lot of new construction of single-family homes that are easier to price accurately. As was apparent from the demise of Zillow's iBuying program, these companies have struggled with profitability because home values can be difficult to predict. The aspects of real estate transactions that slow down traditional homebuyers (title checks, inspections, etc.) also slow down iBuyers. So while they can buy houses quickly by making all-cash offers with no inspection, they can't offload them much faster than any other seller.

To the degree that iBuyers in the market are a problem, that problem is two-fold. First, they make it harder for first-time homeowners to purchase homes by offering cash and waiving inspections, something few first-time homebuyers can afford to offer. The second problem is a bigger one: iBuyers are buying and selling a lot of starter homes to large, non-local investors rather than back to owner-occupants or local landlords.

 

Transfer from Flippers to Corporate Investors

iBuyers as a group sell a lot of homes to corporate landlords, but it varies by company. After Zillow discontinued its iBuying program, Bloomberg reported that the company planned to offload 7,000 homes to real estate investment trusts (REITs). Offerpad sells 10-20% of its properties to institutional investors. Opendoor claims that it sells “the vast majority” of its properties to owner-occupiers. RedfinNow doesn’t sell to REITs at all. Despite the variation between companies, iBuyers on the whole sold one-fifth of their flips to institutional investors in 2021, with those sales more highly concentrated in neighborhoods of color. 

REITs allow firms to pool funds, buy bundles of properties, and convert them to SFRs. In addition to shrinking the pool of homes available for would-be owner-occupiers, REITs hire or own corporate entities to manage the properties. Management companies for REITs have increasingly come under fire for poor management, aggressively raising rent, and evictions. This is as true in the Twin Cities as elsewhere. Local and state governments do not always appear to be on the same page regarding enforcement of consumer and tenant protection laws. For example, while the Minnesota AG’s office filed a lawsuit against HavenBrook Homes, the city of Columbia Heights renewed rental occupancy licenses for the company. 

 

Discouraging iBuyers and REITs

If we agree as a policy matter that single-family homes should be owner-occupied, what are some ways to slow down the transfer of properties and give traditional owner-occupants a fighting chance? The most obvious place to start is by considering a ban on iBuyers and investment firms acquiring homes. The Los Angeles city council voted late last year to explore such a ban. Canada has voted to ban most foreigners from buying homes for two years to temper its hot real estate market, a move that will affect iBuyers and investors.

Another option is to make flipping single-family homes less attractive for iBuyers. A state lawmaker from San Diego recently proposed Assembly Bill 1771, which would impose an additional 25% tax on the gain from a sale occurring within three years of a previous sale. This is a spin on the housing affordability wing of Bernie Sanders's 2020 presidential campaign, which would have placed a 25% house-flipping tax on sellers of non-owner-occupied property and a 2% empty-homes tax on vacant, owned homes. But if iBuyers arguably provide a valuable service to sellers, then it may not make sense to attack iBuyers across the board. Instead, it may make more sense to limit or heavily tax sales from iBuyers to investment firms, or, conversely, to reward iBuyers with a tax break for reselling homes to owner-occupants rather than to investment firms.

It is also possible to make investment in single-family homes less attractive to REITs. In addition to banning sales to foreign investors, the Liberal Party of Canada pitched an “excessive rent surplus” tax on post-renovation rent surges imposed by landlords. In addition to taxes, heavier regulation might be in order. Management companies for REITs can be regulated more heavily by local governments if the government can show a compelling interest reasonably related to accomplishing its housing goals. Whether REIT management companies are worse landlords than mom-and-pop operations is debatable, but the scale at which REITs operate should on its own make local governments think twice about whether it is a good idea to allow so much property to transfer to investors. 

Governments, neighborhood associations, and advocacy groups can also engage in homeowner education regarding the downsides of selling to an iBuyer or investor. Many sellers are hamstrung by needing to sell quickly or to the highest bidder, but others may have more options. Sellers know who they are selling their homes to, but they have no control over to whom that buyer ultimately resells. If they know that an iBuyer is likely to resell to an investor, or that an investor is going to turn their home into a rental property, they may elect not to sell their home to the iBuyer or investor. Education could go a long way for these homeowners. 

Lastly, governments themselves could do more. If they have the resources, they could create a variation on Edina's Housing Preservation program, in which homeowners sell their houses to the city to preserve them as affordable starter homes. In a tech-oriented spin on that program, the local government could purchase a house to make sure it ends up in the hands of another owner-occupant rather than an investor. Governments could also decline to sell single-family homes seized through tax forfeiture to iBuyers or investors. And governments can encourage more home-building by loosening zoning restrictions. More homes means a less competitive housing market, which REIT defenders say will make the single-family market a less attractive investment vehicle. Given the competitive advantage of such entities, it seems unlikely that first-time homebuyers could be on equal footing with investors absent such disincentives.


I Think, Therefore I Am: The Battle for Intellectual Property Rights With Artificial Intelligence

Sara Pistilli, MJLST Staffer

Artificial intelligence (AI) is a computer or robot that is able to perform tasks that are usually done by humans because they require human judgement and intellect. Some AI can be self-learning, allowing them to learn and progress beyond their initial programming. This creates an issue of inventorship when AI creates patentable subject matter without any contribution from the original inventor of the AI system. This technological advancement has posed the larger question of whether AI qualifies as an “individual” under the United States Patent Act and whether people who create AI machines are able to claim the patent rights when the AI has created the patentable subject matter.

Artificial Intelligence “Inventors”

Patent law is continuously changing as technology expands and advances. While the law has adapted to accommodate innovative technology in the past, its treatment of AI has not been fully articulated. The United States Patent and Trademark Office (USPTO) opened a comment period on patenting AI inventions in 2019; however, it does not appear the agency sought input for any purpose beyond gathering information from the public. The USPTO again asked for comment on patent eligibility jurisprudence as it related to specific technological areas, including AI, in 2021. It gathered this information as a "study" and did not pursue any official action. The first official push to recognize AI as an inventor came from Dr. Stephen Thaler. Thaler built an AI machine called "DABUS" and sought patent rights for the machine's inventions. Thaler did not argue that DABUS should hold the patent rights, but rather that the machine should be named the inventor with Thaler as the patent owner. Thaler's insistence on naming DABUS as the inventor complies with the USPTO's rules regarding the inventor's oath or declaration that accompanies a patent application.

United States’ Rulings

Thaler applied for patent rights over a food container and devices and methods for attracting enhanced attention, both invented by his AI machine, DABUS. The USPTO rejected his application, stating that U.S. law does not allow artificial intelligence to be listed as an inventor on a patent application or patent. Citing the Patent Act, the USPTO explained that an inventor must be a person, not a machine, and that reading "inventor" to include machines would be too broad. Thaler requested reconsideration from the USPTO, which was later denied. In 2021, Thaler appealed his rejection in the Eastern District of Virginia. Thaler failed to obtain patent rights, with Judge Brinkema ruling that only a human can be an inventor. Judge Brinkema relied heavily on the Supreme Court's statutory interpretation of the word "individual" in a 2012 case on the Torture Victim Protection Act, in which the Court concluded that an "individual" refers to a "natural person." Judge Brinkema further stated that it is up to Congress to decide how to alter patent law to accommodate AI in the future. Thaler now has a pending appeal before the Court of Appeals.

International Rulings

While countries' patent systems are independent of one another, they can be influenced by technological and regulatory developments in other countries. Thaler has sought patent rights for DABUS's two inventions discussed above in several countries including, but not limited to, the United Kingdom, Australia, and South Africa. Thaler obtained patent rights in South Africa, a first in intellectual property history. Of note, however, South Africa's patent system does not have a substantive patent examination system like other countries, nor do its patent laws define "inventor." Thaler received a more persuasive ruling in Australia that may effectuate change in other countries. In 2021, Thaler's patent application was denied in Australia. The Australian Patent Office (APO) stated that the language of the Patents Act was inconsistent with AI being treated as an inventor. Thaler appealed this decision to the Federal Court of Australia. Justice Beach ruled that AI can be a recognized inventor under the Australian Patents Act, though AI cannot be an applicant for or an owner of a patent, and on that basis remitted the case to the Deputy Commissioner of the APO for reconsideration. The APO is now appealing this decision. Like the APO, the United Kingdom Intellectual Property Office (UKIPO) also pushed back against Thaler's application for patent rights. In 2019, the UKIPO rejected Thaler's application, stating that listing DABUS as an inventor did not meet the requirements of the United Kingdom's Patents Act: a person must be identified as the inventor. Thaler appealed this rejection and was again denied by the UKIPO, which reasoned that recognizing a machine as an inventor would not further the innovation that patent rights are designed to encourage.
Thaler appealed again, to the England and Wales Patents Court, and was again denied patent rights. The judge held that Thaler was taking the Patents Act's text out of context, ruling that the Act cannot be construed to allow non-human inventors. In 2021, Thaler appealed this decision to the England and Wales Court of Appeal. He was again denied patent rights, with all three judges agreeing that a patent is a right that can only be granted to a person and that an inventor must be a person.

Future Prospects

Thaler currently has pending applications in several countries including Brazil, Canada, China, and Japan. The outcome of the appeal of the Federal Court of Australia's decision on whether AI can be an inventor may prove crucial in prompting amendments to U.S. patent law. Similarly, if more countries follow South Africa in outright granting Thaler his patent rights, the U.S. may be forced to rethink its policies on AI-invented patentable subject matter.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes: one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations under the Biometric Information Privacy Act (BIPA). Under the BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident who used the app prior to October 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, compared to the one share members of the nationwide class are eligible for. This difference is due to the added protection the Illinois subclass has under BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said they would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was "among the largest privacy-related payouts in history." And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively search out and cease practices that may violate privacy.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their "For You Page" – the videos that appear when you open the app and scroll without doing any particular search – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


The StingRay You’ve Never Heard Of: How One of the Most Effective Tools in Law Enforcement Operates Behind a Veil of Secrecy

Dan O’Dea, MJLST Staffer

One of the most effective investigatory tools in law enforcement has operated behind a veil of secrecy for over 15 years. "StingRay" cell phone tower simulators are used by law enforcement agencies to locate and apprehend violent offenders, track persons of interest, monitor crowds when intelligence suggests threats, and intercept signals that could activate devices. When operating passively, StingRays mimic cell phone towers, forcing all nearby cell phones to connect to them while extracting data such as call metadata, text messages, internet traffic, and location information, even when a connected phone is powered off. They can also inject spying software into phones and prevent phones from accessing cellular data. StingRays were initially used overseas by federal law enforcement agencies to combat terrorism, before spreading into the hands of the Department of Justice and Department of Homeland Security, and are now actively used by local law enforcement agencies in 27 states to solve everything from missing persons cases to thefts of chicken wings.

The use of StingRay devices is highly controversial due to their intrusive nature. Not only does the use of StingRays raise privacy concerns, but tricking phones into connecting to tower-mimicking StingRays also prevents them from reaching legitimate cell service towers, which can obstruct access to 911 and other emergency hotlines. Perplexingly, the use of StingRay technology by law enforcement is almost entirely unregulated. Local law enforcement agencies frequently cite secrecy agreements with the FBI and the need to protect an investigatory tool as grounds for denying the public information about how StingRays operate, and criminal defense attorneys have almost no means of challenging their use without this information. While the Department of Justice now requires federal agents to obtain a warrant to use StingRay technology in criminal cases, an exception is made for matters relating to national security, and the technology may have been used under this exception to spy on racial-justice protestors during the summer of 2020. Local law enforcement agencies are almost completely unrestricted in their use of StingRays, and may even conceal their use in criminal prosecutions by tagging their findings as those of a "confidential source" rather than admitting the use of a controversial investigatory tool. Doing so allows prosecutors to avoid battling Fourth Amendment arguments characterizing data obtained by StingRays as the product of an unlawful search and seizure.

After existing in a "legal no-man's land" since the technology's inception, StingRays drew legislative attention in June 2021, when Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) introduced the Cell-Site Simulator Warrant Act of 2021 to put an end to the secrecy. The bill would have mandated that law enforcement agencies obtain a warrant to investigate criminal activity before deploying StingRay technology, and would have required them to delete the data of phones other than those of investigative targets. Further, the legislation would have required agencies to demonstrate a need to use StingRay technology that outweighs any potential harm to the community impacted by the technology. Finally, the bill would have limited authorized use of StingRay technology to the minimum amount of time necessary to conduct an investigation. However, the Cell-Site Simulator Warrant Act of 2021 appears to have died in committee after failing to garner significant legislative support.

Ultimately, no device with the intrusive capabilities of StingRays should be allowed to operate free from the constraints of regulation. While StingRays are among the most effective tools utilized by law enforcement, they are also among the most intrusive into the privacy of the general public. It logically follows that agencies seeking to operate StingRays should be required to make a showing of a need to utilize such an intrusive investigatory tool. In certain situations, it may be easy to establish the need to deploy a StingRay, such as doing so to further the investigation of a missing persons case. In others, law enforcement agencies would correctly find their hands tied should they wish to utilize a StingRay to catch a chicken wing thief.


Timing Trouble: To What Extent Should We Assume People Will Break the Law?

Jack Brooksbank, MJLST Staffer

City planners and civil engineers across the country face a little-known, yet extremely important, question when designing road systems: how long should the green lights last? Anyone who has ever had a regular commute probably wishes the answer was simply “longer,” but this seemingly minor detail can get quite complex. Traffic light timing decisions are made by both government officials and specialist consulting firms, based on extensive studies, and supported by academic papers. The practice of traffic light timing is so established that it has its own lingo.

Perhaps the most important part of traffic light timing is coordination. Engineers try to set the cycles of lights on a given route in concert, so that a car passing through one green light finds the next light turning green in time for it to continue. “The intent of coordinating traffic signals is to provide smooth flow of traffic along streets and highways in order to reduce travel times, stops and delay.” When done well, it leads to a phenomenon known in the industry as a “green wave,” where a car hits every green light in a row and never needs to come to a stop.

It’s not just a minor detail, either. Coordination can have some serious benefits for a city. One town revamping its timing scheme estimated it would reduce travel times by as much as 10%. And although making the morning commute go more smoothly is a worthy goal in itself, proper light timing can create other benefits too. Efficient traffic light timing can even help the environment: by reducing the number of stops, and the total time spent driving, coordinated traffic signals reduce the amount of fuel burned, and greenhouse gasses produced, by commuters.

However, timing traffic lights relies in large part on one central assumption: that a car leaving one green light takes a certain amount of time to get to the next one. This raises a potential problem: drivers don’t follow the speed limit. Indeed, one study found that nearly 70% of all drivers regularly speed! When timing traffic lights, then, designers must make a choice: do they time the lights based on the legal speed limit, or based on the speed drivers actually go?

If timing is based on the speed limit, many cars will still arrive at the next light before it has turned green. The coordination of signals won’t have mattered, and the cars will still have to come to a stop. By basing the timing on the wrong speed, the designers have negated the benefit of their careful work, and might as well have saved the time and money needed for figuring out how to coordinate the signals in the first place. But, if instead timing is based on the speed drivers really travel, designers are essentially rewarding illegal behavior—and punishing those drivers who do actually follow the law with extra stops and delays!
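The stakes of that choice are easy to quantify. As a rough sketch (the corridor spacing and speeds here are hypothetical, not drawn from any real timing plan), the offset between two coordinated signals is simply the distance between them divided by the speed cars are assumed to travel:

```python
def offset_seconds(spacing_ft: float, speed_mph: float) -> float:
    """Seconds the downstream signal's green should lag the upstream one,
    assuming cars travel at speed_mph between the two intersections."""
    feet_per_second = speed_mph * 5280 / 3600  # convert mph to ft/s
    return spacing_ft / feet_per_second

# Hypothetical corridor: signals spaced 1,320 ft (a quarter mile) apart.
legal = offset_seconds(1320, 30)   # timed to a 30 mph speed limit
actual = offset_seconds(1320, 35)  # timed to the speed drivers really go

print(round(legal, 1))   # 30.0 seconds
print(round(actual, 1))  # 25.7 seconds
```

Timed to the 30 mph limit, the downstream green arrives 30 seconds after the upstream one; a driver doing 35 mph covers the block more than four seconds early and sits at a red. Time the offset to 35 mph instead, and it is the law-abiding driver who misses the green wave.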

Most major cities now rely on actuated controllers, or devices that detect when cars are approaching in order to trigger light changes without human input. Some cities are even experimenting with AI-based systems that take the design out of human hands completely. Advances in technology have thus heavily favored the “actual speed” approach, but is this because a decision was made to accommodate speeding drivers? Or have cities, in their enthusiasm to reduce congestion, simply adopted the latest in technology without considering the policy choice that it entails?

Also, if traffic lights should be timed for the actual speed cars travel, it may raise further implications for other areas of law that rely on questionable assumptions of human behavior. Perhaps most notable is the law of contracts, which generally relies heavily on the assumption that people read contracts before signing them. But as electronic devices, apps, and online content proliferate, this assumption gets farther from the truth. And people can hardly be blamed for agreeing without reading: one investigation in Norway found that people have an average of 33 apps on their smartphones, and that reading the terms and conditions of that many apps would take an average of 31 hours. Another investigation found that simply reading all the website privacy policies an average internet user encounters in a year would require 76 eight-hour days of reading! If we should time traffic lights to account for people being too impatient to follow the legal speed limit, surely we should update the laws of contract to account for such a crushing reading load. Perhaps it is time to reform many areas of law, so that they are no longer grounded on unrealistic expectations of human behavior.

 


Mystery Medicine: How AI in Healthcare Is (or Isn’t) Different From Current Medicine

Jack Brooksbank, MJLST Staffer

Artificial Intelligence (AI) is a funny creature. When we say AI, generally we mean algorithms, such as neural networks, that are "trained" based on some initial dataset. This dataset can be essentially anything, such as a library of tagged photographs or the set of rules to a board game. The computer is given a goal, such as "identify objects in the photos" or "win a game of chess." It then systematically iterates some process, depending on which algorithm is used, and checks the result against the known results from the initial dataset. In the end, the AI finds some pattern—essentially through brute force—and then uses that pattern to accomplish its task on new, unknown inputs (by playing a new game of chess, for example).
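That loop of iterating, checking against labeled examples, and adjusting can be sketched in a few lines. This toy single-neuron model (an illustrative stand-in with made-up data, not any production system) brute-forces its way to a pattern that fits a tiny labeled dataset:

```python
# Toy "training": repeatedly guess, check against labeled data,
# and nudge the weights toward fewer errors.
training_data = [  # (inputs, correct label): a trivially simple pattern
    ((0.0, 0.0), 0), ((0.0, 1.0), 0),
    ((1.0, 0.0), 1), ((1.0, 1.0), 1),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    """Apply the learned pattern to an input."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for _ in range(100):  # the iterative "training" loop
    for x, label in training_data:
        error = label - predict(x)  # check against the known answer
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in training_data])  # [0, 0, 1, 1]
```

The model ends up matching every labeled example, but it never "understands" why those labels are right; it has only found *a* pattern that fits, which is exactly how the sheep-versus-grassy-field mistake described below can happen.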

AI is capable of amazing feats. IBM's Deep Blue famously defeated world chess champion Garry Kasparov back in 1997, and the technology has only gotten better since. Tesla, Uber, Alphabet, and other giants of the technology world rely on AI to develop self-driving cars. AI is used to pick stocks, predict risk for investors, spot fraud, and even determine whether to approve a credit card application.

But, because AI doesn't really know what it is looking at, it can also make some incredible errors. One neural network AI trained to detect sheep in photographs instead noticed that sheep tend to congregate in grassy fields. It then applied the "sheep" tag to any photo of such a field, fluffy quadrupeds or no. And when shown a photo of sheep painted orange, it handily labeled them "flowers." Another cutting-edge AI platform has, thanks to a quirk of the original dataset it was trained on, a known propensity to spot giraffes where none exist. And the internet is full of humorous examples of AI-generated weirdness, like one neural net that invented color names such as "snowbonk," "stargoon," and "testing."

One area of immense potential for AI applications is healthcare. AIs are being investigated for applications including diagnosing diseases and aiding in drug discovery. Yet the use of AI raises challenging legal questions. The FDA has been given a statutory mandate to ensure that many healthcare items, such as drugs or medical devices, are safe. But the review mechanisms the agency uses to ensure that drugs or devices are safe generally rely on knowing how the thing under review works. And patients who receive substandard care have legal recourse if they can show that they were not treated with the appropriate standard of care. But AI is helpful precisely because we don’t know how it works: it develops its own patterns beyond what humans can spot. The opaque nature of AI could make effective regulatory oversight very challenging. After all, a patient misdiagnosed by a substandard AI may have no way of proving that the AI was flawed. How could they, when nobody knows how it actually works?

One possible regulatory scheme that could get around this issue is to have AI remain “supervised” by humans. In this model, AI could be used to sift through data and “flag” potential points of interest. A human reviewer would then see what drew the AI’s interest, and make the final decision independently. But while this would retain a higher degree of accountability in the process, it would not really be using the AI to its full potential. After all, part of the appeal of AI is that it could be used to spot things beyond what humans could see. And there would also be the danger that overworked healthcare workers would end up just rubber stamping the computer’s decision, defeating the purpose of having human review.
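The "supervised AI" workflow described above can be sketched as a simple triage pipeline. Everything here is hypothetical: the threshold, the stand-in scoring function, and the record fields are invented for illustration, and the key design choice is that the model only routes cases, while the final decision stays with a human.

```python
# Hypothetical sketch of human-supervised AI: the model flags cases of
# interest, and a clinician independently makes the final call.

FLAG_THRESHOLD = 0.7  # assumed cutoff for routing a case to human review

def ai_risk_score(record):
    # Stand-in for a real model; scores a patient record from 0 to 1.
    return min(1.0, record.get("abnormal_markers", 0) / 10)

def triage(records):
    """Split records into AI-flagged cases (for human review) and the rest."""
    flagged, routine = [], []
    for r in records:
        (flagged if ai_risk_score(r) >= FLAG_THRESHOLD else routine).append(r)
    return flagged, routine

def human_review(flagged):
    # The clinician sees what drew the AI's interest but decides
    # independently; the score is advisory, never the diagnosis.
    return [{**r, "decision": "pending clinician review"} for r in flagged]

patients = [{"id": 1, "abnormal_markers": 9},
            {"id": 2, "abnormal_markers": 2}]
flagged, routine = triage(patients)
print(len(flagged), len(routine))  # one case flagged, one routine
```

Note that the accountability in this scheme lives entirely in `human_review`: if reviewers merely rubber-stamp whatever lands in `flagged`, the structure remains but the safeguard is gone, which is exactly the danger the paragraph above identifies.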

Another way forward could be foreshadowed by a program the FDA is currently testing for software update approval. Under the Pre-Cert pilot program, companies can get approval for the procedures they use to make updates. Then, as long as future updates are made using that process, the updates themselves are subject to a greatly reduced approval burden. For AI, this could mean agencies promulgating standardized methods for creating an AI system, such as lists of approved algorithm types and systems for choosing the datasets an AI is trained on, with private actors then having to show only that their system was set up well.

And of course, another option would be to simply accept some added uncertainty. After all, uncertainty abounds in the current healthcare system, despite our best efforts. For example, lithium is prescribed to treat bipolar disorder despite uncertainty in the medical community about how it works. Indeed, the mechanism of many drugs remains mysterious. We know that these drugs work, even if we don’t know how; perhaps applying the same standard to AI in medicine wouldn’t really be so different after all.