
Call of Regulation: How Microsoft and Regulators Are Battling for the Future of the Gaming Industry

Caroline Moriarty, MJLST Staffer

In January 2022, Microsoft announced its proposed acquisition of Activision Blizzard, a video game company, promising to “bring the joy and community of gaming to everyone, across every device.” However, regulators in the United States, the EU, and the United Kingdom have recently indicated that they may block the acquisition because of its antitrust implications. In this post I’ll discuss the proposed acquisition, the antitrust concerns it raises, recent actions from regulators, and the deal’s prospects for success.

Background

Microsoft, in addition to making the Windows platform, the Microsoft Office suite, Surface computers, cloud computing software, and, of new relevance, Bing, is a major player in the video game space. Microsoft owns Xbox, which, along with Nintendo and Sony (PlayStation), is one of the three most popular gaming consoles. One of the main ways these consoles distinguish themselves from their competitors is through “exclusives,” games that can be played only on a single console. For example, Spider-Man can be played only on PlayStation, the Mario games are exclusive to Nintendo, and Halo can be played only on Xbox. Other games, like Grand Theft Auto, Fortnite, and FIFA, are offered on multiple platforms, allowing consumers to play on whatever console they already own.

Activision Blizzard is a video game holding company, meaning it owns games developed by game development studios and makes decisions about marketing, creative direction, and console availability for individual games. Its most popular games include World of Warcraft, Candy Crush, Overwatch, and one of the most successful game franchises ever, Call of Duty. Readers outside the gaming space may recognize Activision Blizzard’s name from recent news stories about its toxic workplace culture.

In January 2022, Microsoft announced its intention to purchase Activision Blizzard for $68.7 billion, which would be the largest acquisition in the company’s history. Microsoft stated that its goals were to expand into mobile gaming and to make more titles available, especially through Xbox Game Pass, a streaming service for games. After the announcement, critics pointed out two main issues. First, if Microsoft owned Activision Blizzard, it could make the company’s titles exclusive to Xbox. This is especially problematic for the Call of Duty franchise: not only does it include the top three most popular games of 2022, but an estimated 400 million people play at least one of its games, 42% of whom play on PlayStation. Second, Microsoft could also make Activision Blizzard titles exclusive to Xbox Game Pass, which would change the structure of the relatively new cloud streaming market.

The Regulators

Microsoft’s proposed acquisition has drawn scrutiny from the FTC, the European Commission, and the UK Competition and Markets Authority. In what the New York Times has dubbed “a global alignment on antitrust,” the three regulators have pursued a connected strategy. First, the European Commission announced an investigation of the deal in November 2022, signaling that the deal would take time to close. A month later, the FTC sued in its own administrative court, a forum more favorable to antitrust claims. In February 2023, the Competition and Markets Authority released provisional findings on the acquisition’s effect on UK markets, writing that the merger may be expected to result in a substantial lessening of competition. Finally, the European Commission completed its own investigation, concluding that the possibility of Microsoft making Activision Blizzard titles exclusive “could reduce competition in the markets for the distribution of console and PC video games, leading to higher prices, lower quality and less innovation for console game distributors, which may, in turn, be passed on to consumers.” Together, the agencies are signaling a new era in antitrust, one that is much tougher on deals than in the recent past.

The FTC specifically called out Microsoft’s past acquisitions in its complaint. When Microsoft acquired Bethesda (another video game company, known for games like The Elder Scrolls V: Skyrim) in 2021, it told the European Commission that it would keep titles available on other consoles. After the deal cleared, Microsoft announced that several Bethesda titles, including the highly anticipated games Starfield and Redfall, would be Microsoft exclusives. The FTC used this episode to show that any promises by Microsoft to keep games like Call of Duty available to all consumers could be broken at any time. Microsoft has disputed this characterization, arguing that it made titles exclusive on a “case-by-case basis,” in line with what it told the European Commission.

For the current deal, Microsoft has agreed to make Call of Duty available on the Nintendo Switch, and it claims to have offered Sony a guarantee that the franchise would remain available on PlayStation for ten years. This type of guarantee is known as a conduct remedy: it preserves competition by requiring the merged firm to commit to certain business actions, or to refrain from certain conduct, going forward. Structural remedies, in contrast, usually require a company to divest assets by selling parts of the business. The Live Nation and Ticketmaster merger offers one example of conduct remedies: the companies agreed not to retaliate against concert venue customers that switched to a different ticketing service and not to tie sales of ticketing services to the concerts Live Nation promoted. However, as the recent Taylor Swift ticketing debacle suggests, conduct remedies may not be effective in eliminating anticompetitive behavior.

Conclusion

Microsoft faces an uphill battle with its proposed acquisition. The company claims that Xbox does not exercise outsize influence in the gaming industry, but the sheer size and potential effects of this acquisition make that claim much weaker. Further, the company faces stricter scrutiny from new regulators in the United States. Assistant Attorney General Jonathan Kanter, who leads the DOJ’s antitrust division, has already indicated that he prefers structural remedies to conduct remedies, and FTC Chair Lina Khan is well known for her opposition to big tech companies. If Microsoft wants this deal to succeed, it may have to provide more convincing evidence that it will not repeat its past anticompetitive conduct.


The Apathetic Divide: Surrogacy and the Anglo-American Courtroom

Kelso Horne, MJLST Staffer

The State of New York defines gestational surrogacy as “a process where one person, who did not provide the egg used in conception, carries a fetus through pregnancy and gives birth to a baby for another person or couple.” The process can be fraught with legal, technical, and moral issues, particularly when the surrogacy is paid for via contract with the surrogate, an arrangement called Compensated Gestational Surrogacy (CGS). Until 2020, this kind of contractual paid surrogacy was illegal in the state of New York; that year, the Child-Parent Security Act legalized it and established a regulatory regime. In contrast, Louisiana has one of the harshest gestational surrogacy regimes in the world, outright banning CGS and requiring both sets of gametes to come from a married couple residing in the state. But these competing regulatory regimes are not replicated across the nation. On the contrary, most states have passed no laws legalizing or banning CGS or other fertility practices, like the sale of gametes. With sparse case law and frequent legal limbo, “is CGS legal for me?” can be a difficult question for many Americans to answer.

Across the Atlantic, the question used to be an easy one to answer. In 1985, the UK Parliament enacted the Surrogacy Arrangements Act, which made it an offense to “initiate or take part in any negotiations with the view of making a surrogacy arrangement”, along with related activities, like compiling information to assist in the creation of surrogacy arrangements. Critically, however, the Act did not criminalize looking to hire a surrogate or looking to become one; it reached only middlemen and those publishing advertisements on behalf of parties seeking a surrogate’s services. The Human Fertilisation and Embryology Act 1990 then defined the mother of a child under UK law as “[t]he woman who is carrying or has carried a child… and no other woman”. In 2001, the Court of Appeal heard Briody v. St Helens and Knowsley Area Health Authority. The question before the court was one of damages. A woman, rendered infertile as a result of medical negligence, sought £78,267 to obtain the services of a surrogate in California, which had legalized CGS in 1993 in the landmark case Johnson v. Calvert. Lady Justice Hale, speaking for the court, foreclosed the use of CGS in California or elsewhere, as the proposal was “contrary to the public policy of the country”. While she did not entirely dismiss the idea of providing damages to pay for surrogacy procedures, she said it would be permitted only in the case of a voluntary, unpaid surrogate.

Few appellate judges get to rule on the same facts twice in their career. In 2020, in one of her final cases before retiring, Lady Hale, by then sitting on the UK Supreme Court (which in 2009 replaced the House of Lords as the UK’s highest court), did just that. In Whittington Hospital NHS Trust (Appellant) v XX, the court held that a woman who had been rendered infertile as a result of medical negligence could claim damages, including the costs of paying a US-based surrogate to carry her children. CGS, while still illegal in the UK, could now nevertheless provide the basis for damages in a UK court. The court did note some factual differences between Whittington Hospital and Briody, notably that the likelihood a surrogacy arrangement would result in a child was higher in the former. However, the court’s main argument for its opposite ruling was a change in cultural attitudes toward surrogacy and its role in society: “[t]he use of assisted reproduction techniques is now widespread and socially acceptable.”

While admitting that surrogacy was now widely accepted in UK society, the dissent, authored by Lord Carnwath, nevertheless disagreed with the court. It argued that the criminal law of the UK remained clearly averse to commercial surrogacy, and that by awarding damages for CGS in California the court had misaligned the UK’s civil and criminal law. Thus, the CGS regimes of the UK and the U.S. are now bound together. UK citizens may seek surrogacy arrangements and have them compensated by the UK government through the National Health Service, but they must use an American “womb”. A financial arrangement which the UK itself deems too unethical to allow inside its own borders is nevertheless compensable when it occurs in other countries. This deeply strange situation is mirrored in the opaque CGS law of the United States itself.

A quick glance at any 50-state review of surrogacy laws, whether compiled by supporters or opponents of commercial surrogacy, paints a similar picture: a strange ad hoc mix of case law that often covers only ancillary issues or is at least 30 years old. Some scholars have begun to publicly discuss the possible ethical pitfalls of “procreative tourism”, but without clear legal rules governing which arrangements are and are not allowed, it is difficult to discuss possible solutions. The dangers of this shadow regime were thrown into stark relief by the war in Ukraine, a country that, prior to the Russian invasion, was a major source of surrogate mothers. Surrogates there were paid on average $15,000 per child, a considerable sum in a country whose pre-invasion GDP per capita was less than $5,000. The United States needs to determine whether it wishes to become a “destination” country for procreative tourism, as the result in Whittington would suggest it is, and whether it wishes to allow its own citizens to travel abroad to engage in CGS.

This blog has touched on only a small fraction of the issues faced in determining the ideal regulatory regime for surrogacy. But a lack of discussion, and a failure to acknowledge possible risks, leaves us ignorant of what the problems may be, let alone the route to potential solutions. States have largely failed to address the issue since the first CGS babies were born within their borders in the late 1980s and early 1990s, and the UK has likewise passed the buck without a serious response to the issues surrounding CGS. It’s time for a serious examination of CGS regulation as it exists, as well as a meaningful discussion about safeguarding the health and wellbeing of those involved in such a transaction. Regardless of one’s opinion of the Louisiana and New York regulations, potential participants in a surrogacy arrangement in those two states at least know the boundaries. That should be the case nationwide.


Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media have led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% does not use social media.[1] That works out to roughly 60% of the world on social media, around 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and other people in their circle. But along with the growing use of social media, questions have been raised about the potential liability social media corporations may have for the content posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their message or plan attacks on their platforms.[3] The question we are left with: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing it to post videos on YouTube and then recommending those videos to users through Google’s algorithm.[4] The family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] Gonzalez v. Google and Twitter v. Taamneh will both be argued before the Supreme Court this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. § 230 is intended to protect freedom of expression by protecting intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like 230(c)(1), Section 230(c)(2) gives internet providers liability protection: it allows them to moderate content in certain circumstances and shields them from the free speech claims that such moderation would otherwise invite.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may proceed against Google for YouTube’s targeted recommendations of those videos.[12] And in Taamneh, the Ninth Circuit agreed with the plaintiffs that the claim could go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. Although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose the statute because they perceive social media platforms as restricting conservative viewpoints. Prominent among them is Senator Josh Hawley of Missouri, who argues that the tech platforms ought to be treated as distributors and lose Section 230 protections.[14] Hawley introduced a piece of legislation to that effect, the Federal Big Tech Tort Act, which would impose liability on tech platforms.[15] On the left, Section 230 is supported by those who believe it protects the voices of the marginalized, who would otherwise be at the whim of tech companies, and opposed by those who fear it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs argue that Section 230 should not protect Google because the events occurred outside the US, because the statute is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and because algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The Ninth Circuit held that although Section 230 applies abroad, JASTA does not supersede it; instead, the two statutes run parallel to each other. The Ninth Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed: it did not think Google was contributing to terrorism, since Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal partly because the complaint did not make sufficiently clear how much support Google had provided to ISIS.[19] Future decisions in this case will bear on questions such as whether Section 230 covers algorithmic recommendations.[20]

In Taamneh, the defendants argued that there was no proximate cause, as well as arguing that Section 230 was inapplicable.[21] Unlike in Gonzalez, the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups, and the dismissal was reversed. The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both cases, fears regarding the scope of Section 230 were expressed, which could bode poorly for its applicability going forward.[24]

Gonzalez and Taamneh will hit the Supreme Court soon. If Section 230 is preserved as is, the internet keeps the broad free expression the statute enables, but more people risk exposure to harms like hate speech or violence. If it is restricted, platforms may police content far more aggressively, curbing the accessibility and openness that have made the internet what it is today. Whichever decision is made, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/#:~:text=The%20number%20of%20social%20media,growth%20of%20%2B137%20million%20users.

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Supra note 12 (Washington Post).

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Supra note 12 (Washington Post).

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24]Id.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses will see enhanced regulation of data privacy. A number of data security laws and regulations came into effect in 2023, increasing the legal requirements on company-held data in order to protect companies’ customers. Two stand out: the FTC Safeguards Rule and the EU’s NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. It requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by the rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires covered financial institutions to “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”

One question that arises is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the rule does not run counter to the FTC’s mission of protecting consumers, but its economic costs and effects are debatable. One concern is that the rule may impose substantial costs, especially on small businesses with less capital than large companies, for whom the new burdens may prove unbearable. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the rule’s requirements, or have sophisticated programs easily adaptable to new obligations, the FTC has still underestimated the rule’s burdens.[4] Specifically, labor shortages have hampered financial institutions’ efforts to implement information security systems, and supply chain issues have delayed obtaining equipment for updating information systems. Importantly, as Commissioner Wilson notes, most of these factors are outside the control of the financial institutions. Implementing a heightened standard now would thus cause unfairness, especially to small financial institutions that have even more trouble obtaining the necessary equipment during times of supply chain and labor shortages.

Recognizing such difficulties, the FTC did offer some leniency in implementing the rule, extending the compliance deadline by six months, primarily because supply chain issues and a shortage of qualified personnel could delay implementation of information security programs. This extension benefits the rule’s rollout because it gives covered financial institutions time to adjust and comply.

Another concern is that the rule’s mandates will not significantly reduce the data security risks facing customers. The answer here remains uncertain, as the FTC Safeguards Rule only recently came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rule-making process the FTC sought comments on the proposed Safeguards Rule and extended the public comment deadline by 60 days.[5] This suggests the FTC took careful consideration of how to most effectively reduce data security risks, giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires member states to be appropriately equipped with response and information systems, sets up a Cooperation Group to facilitate the exchange of information among member states, and aims to ensure a culture of security across sectors that rely heavily on critical infrastructure, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers. The NIS2 Directive echoes the FTC Safeguards Rule to a large extent in its elevated standard for cybersecurity measures.

However, the NIS2 Directive differs by imposing duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive designates that ENISA assist Member States and the Cooperation Group set up under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Ordering the agency itself to facilitate the Directive’s implementation may increase its likelihood of success. Although the outcome is uncertain, primarily because of the Directive’s broad language, burdens on financial institutions will at least be lessened to some extent. What also distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This offers more flexibility than the FTC’s six-month extension: as the Directive passes through national legislative processes, financial institutions will have more time to prepare for and respond to the coming changes.

In summary, data privacy laws are tightening globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate similar industries. That said, regardless of what happens in the EU, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the Rule’s outcome is uncertain, the six-month extension will at least offer a degree of flexibility.

Notes

[1] https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.

 


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and with one study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of a bleak future in a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without limitations, however. OpenAI says these include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or to repeated attempts at the same prompt, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
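To make “autoregressive” concrete, the loop below is a minimal sketch of next-word prediction. It uses the publicly available GPT-2 model, an earlier member of the GPT family, as a stand-in, since ChatGPT’s own model weights are not public; it assumes the Hugging Face transformers library and PyTorch are installed, and the prompt and token count are arbitrary choices for illustration.

```python
# A minimal sketch of autoregressive decoding with GPT-2 as a stand-in
# for ChatGPT's (non-public) model. Requires `pip install torch transformers`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Encode an arbitrary prompt into token IDs.
input_ids = tokenizer.encode("The fear of robots taking over", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()        # greedily pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop feeds everything generated so far back into the model, which is what “predicting the next word given a body of text” means in practice; ChatGPT layers its dialogue formatting and fine-tuning on top of this same basic mechanism.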

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, calls it a tipping point for AI because of this difference in quality: it can be used to write weight-loss plans, children’s books, and even advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to have ChatGPT write this blog post for me, although it produced only 347 words—nowhere near the 1,000-word minimum I had set for it. What is evident from these examples, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, creating computer code for a startup prototype using code libraries they had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, though by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, it would, notably, still graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being deployed only where failure is expensive and intolerable, as with autonomous driving, AI is now being used for tasks where some failure is acceptable. In these tasks, AI like ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities for productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block can gain sudden inspiration from a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from producing offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans can be vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to produce human-like intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed impossible sooner rather than later. Viewed this way, AI like ChatGPT is not the harbinger of a human-machine society but an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1]“Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach the AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI, one constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps far more unsettling is the idea that this AI is indiscriminately sucking in user data like a black hole.
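For readers curious what “pre-training followed by fine-tuning” looks like mechanically, here is a deliberately tiny sketch in PyTorch. Everything in it (the toy bigram-style model, the random token data) is a hypothetical stand-in; OpenAI’s actual models, data, and training code are vastly larger and not public. The point is only that both stages optimize the same next-token objective, differing in the data they consume.

```python
# A toy illustration of the two-stage recipe: (1) pre-train on unlabeled
# text with a next-token objective, then (2) fine-tune the same weights
# on smaller curated data. Model and data are stand-ins, not OpenAI's.
import torch
import torch.nn as nn

vocab_size, dim = 100, 32
# A bigram-style toy: predicts the next token from the current token only.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())

def train_step(tokens):
    """One next-token step: each position is trained to predict its successor."""
    inputs, targets = tokens[:-1], tokens[1:]
    loss = loss_fn(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# (1) Pre-training: large volumes of unlabeled text (random IDs here).
for _ in range(100):
    train_step(torch.randint(0, vocab_size, (16,)))

# (2) Fine-tuning: the same objective, but on curated examples,
# e.g. human-written dialogue turns (again faked here).
curated = torch.randint(0, vocab_size, (16,))
for _ in range(10):
    train_step(curated)
```

In a system like ChatGPT, stage (2) is where user conversations could in principle re-enter the pipeline, which is exactly what makes the data privacy questions below so pointed.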

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting advertisements based on user data or Google recommending restaurants based on GPS data. But the way our data is being used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, is central to how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

At the same time, the general public may not fully appreciate what kind of privacy protections—or lack thereof—are in place in the United States. In brief, we tend to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of ECPA concerns interceptions of communication, such as wiretapping and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise from the fact that collecting and using data is this AI’s base function. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Collecting and using European citizens’ personal data without a lawful basis violates the GDPR, and what counts as “use” when it comes to ChatGPT is not clear; the use of data in ChatGPT’s fine-tuning process could arguably be a violation.

While a unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by OpenAI, and disclosing it could violate ABA confidentiality rules. As ChatGPT brews even more public fervor, professionals are likely to try the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and users’ interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with affiliates and subsidiaries of the company, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA): transparency is a priority, and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not contemplate what it means for user data to form the very foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI should arguably be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it comes at all. If users of ChatGPT can be cognizant of what they input into the tool, and stay informed about what obligations OpenAI owes its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.