Articles by mjlst

EJScreen: The Environmental Justice Tool That You Didn’t Know You Needed

Emma Ehrlich, Carlisle Ghirardini, MJLST Staffer

What is EJScreen?

EJScreen was developed by the Environmental Protection Agency (“EPA”) in 2010, 16 years after President Clinton’s Executive Order 12898 required federal agencies to begin collecting data on “environmental and human health risks borne by populations identified by race, national origin or income.” The program, a mapping tool available to the public through the EPA’s website since 2015, allows users to examine specific geographic locations and set overlays showing national percentiles for categories such as income, people of color, pollution, and health disparities. Though the EPA cautions that EJScreen is simply a screening tool with limits, the agency uses the program in “[i]nforming outreach and engagement practices, [i]mplementing aspects of …permitting, enforcement, [and] compliance, [d]eveloping retrospective reports of EPA work, [and] [e]nhancing geographically based initiatives.”

As the EPA warns on its website, EJScreen does not contain all pertinent information regarding environmental justice and other data should be collected when studying specific areas. However, EJScreen is still being improved and was updated to EJScreen 2.0 in 2022 to account for more data sets, including data on which areas lack access to food, broadband, and medical services, as well as health disparities such as asthma and life expectancy.
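For readers curious about the mechanics, the percentile overlays described above come down to a simple ranking idea: a location’s indicator value is compared against the values for every other location nationally. The short Python sketch below illustrates that idea only; the variable names, sample values, and the simple at-or-below formula are illustrative assumptions, not the EPA’s actual EJScreen methodology.

```python
# Rough sketch of the percentile-ranking idea behind map overlays like
# EJScreen's. All names and numbers here are hypothetical.

def percentile_rank(value, population):
    """Percent of observations in `population` at or below `value`."""
    at_or_below = sum(1 for v in population if v <= value)
    return 100.0 * at_or_below / len(population)

# Hypothetical indicator values (e.g., an air-toxics score) for census tracts.
tract_scores = {"tract_A": 12.0, "tract_B": 30.5, "tract_C": 8.2}

national = list(tract_scores.values())
for tract, score in tract_scores.items():
    # tract_A -> 67th percentile, tract_B -> 100th, tract_C -> 33rd
    print(f"{tract}: {percentile_rank(score, national):.0f}th percentile")
```

A high percentile on an overlay simply means most other locations score lower on that indicator, which is why the EPA stresses that percentiles flag places for a closer look rather than prove a local harm.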

Current Uses

EJScreen is now being used to evaluate the allocation of federal funding. In February of this year, the EPA announced that it will allocate $1 billion of funding from President Biden’s Bipartisan Infrastructure Law to Superfund cleanup projects, such as sites containing retired mines, landfills, and processing and manufacturing plants. The EPA said that 60% of the new projects are in locations that EJScreen indicated face environmental justice concerns.

EJScreen is also used to evaluate permits. In August of 2022, the EPA published its own guidance addressing environmental justice permitting procedures. The guidance encourages states and other recipients of EPA financial assistance to use EJScreen as a “starting point” when assessing whether a project under permit review may conflict with environmental justice goals. The EPA believes this will “make early discussions more meaningful and productive and add predictability and efficiency to the permitting process.” If an early EJScreen analysis calls a project into question, the EPA instructs permitters to consider additional data before making a permitting decision.

Another use of EJScreen is in the review of complaints under Title VI of the Civil Rights Act. Using the authority provided by Title VI, the EPA has promulgated rules that prohibit any agency or group receiving federal funding from the EPA from operating in a discriminatory way based on race, color, or national origin. The rules also enable people to submit Title VI complaints directly to the EPA when they believe a funding recipient is acting in a discriminatory manner. If the complaint warrants it, the EPA will conduct an investigation. Attorneys who have reviewed EPA letters announcing decisions to investigate have noted that the agency often cites EJScreen when explaining why it decided to move forward with an investigation.

In October of 2022, the EPA sent a “Letter of Concern” to the Louisiana Department of Environmental Quality (“LDEQ”) and the Louisiana Department of Health stating that an initial investigation suggests the two departments have acted in ways that had “disparate adverse impacts on Black residents” when issuing air permits or informing the public of health risks. When discussing a nearby facility’s harmful health effects on residents, the EPA cites EJScreen data in concluding that the facility is much more likely to affect Black residents of Louisiana than non-Black residents. The letter also touches on incorrect uses of EJScreen: LDEQ’s conclusion that a proposed facility would not affect surrounding communities was misleading because LDEQ used EJScreen to show there were no residents within a mile of the proposed facility while ignoring a school located only 1.02 miles away.

Firms such as Beveridge & Diamond have recognized the usefulness of this technology. They urge industry decision makers to use this free tool, and others similar to it, to preemptively consider environmental justice issues that their permits and projects may face when being reviewed by the EPA or local agencies.

Conclusion

EJScreen has the potential to be a useful tool, especially as the EPA continues to update it with additional demographic data. However, users of the software should heed the EPA’s warning that it is simply a screening tool. It is likely best used to rule out locations for certain projects, rather than relied on alone to approve projects in particular locations, a decision that requires collecting more recent, site-specific data.

Lastly, EJScreen is just one of many environmental justice screening tools in use and development. Multiple states have been developing their own screening programs, and research suggests that state screening software may be more beneficial than national software. The White House Council on Environmental Quality also developed an environmental justice screening tool in 2022: its Climate and Economic Justice Screening Tool is meant to assist the government in directing federal funding to disadvantaged communities. The consensus seems to be that all available screening tools are helpful in at least some way and should be consulted by funding recipients and permit applicants in the early rounds of their decision-making processes.


A Manhattan Federal Jury Found Trademark Rights to Extend to the Metaverse. Why Should You Care?

Carlisle Ghirardini, MJLST Staffer

Earlier this month, a federal jury in the Southern District of New York returned a verdict regarding a luxury fashion brand’s trademark rights in the Metaverse – the first trial verdict concerning trademarks in non-fungible tokens (NFTs).[1] The suit was brought in January of 2022 by the Parisian fashion giant Hermès after a digital artist created NFTs of the brand’s iconic “Birkin bag” and made a profit selling these “MetaBirkins.”[2]

The key question in the suit came down to whether the NFT was more akin to art, which would receive First Amendment protection, or to a consumer product, which would be subject to trademark infringement liability.[3] The federal jury found the artist’s use of the Birkin name and style to be more commercial than artistic in nature and, therefore, potentially infringing on Hermès’ trademarks depending on public perception.[4]

Trademark infringement is the unauthorized use of a mark in a way that would confuse a consumer as to the source of the product or service connected to the mark.[5] Surveys and social media evidence in this case showed confusion among NFT consumers as to Hermès’ involvement with the MetaBirkins, which led the jury to find the use of the mark to be infringing and a capitalization of the Hermès brand’s goodwill for profit.[6] Hermès was awarded $133,000 in total damages – a small win for the fashion powerhouse, but a huge win for brand owners across many different industries who now know their trademark rights may be protectable in the Metaverse.[7]

I don’t use or understand the Metaverse – why should I care about this decision?

Even for those who don’t know what an NFT is, this decision to extend trademark rights to the Metaverse is still important. First, many brands are now registering trademarks in the Metaverse, so if a consumer sees a brand in this realm, there is a higher likelihood of confusion as to the brand’s association with that virtual good or service. If people assume a connection between a brand and the unauthorized use of its mark, the brand is at risk of significant damage. For example, if an unauthorized user opened a Metaverse McDonald’s that gave out racy or controversial happy meal prizes, McDonald’s could face serious backlash if its consumers believed McDonald’s to be condoning such activities.[8] Although this connection may seem less convincing or harmful for a big brand like McDonald’s, it was enough to compel Hermès to protect the integrity of its brand and its customers.[9] It is not only big brands that can be victims of such infringement, however. While it is easy to understand why someone would take advantage of a more recognized company due to greater traffic, this could easily happen to smaller brands we know and love. If the little coffee shop chain you frequent were hurt by such virtual infringement, perhaps by a local competitor, it could be run out of business. Connecting a brand in the Metaverse to products or values it is not aligned with could have damaging real-world effects.[10]

Just as brand exposure in the Metaverse can cause harm, it also has the potential to benefit businesses. Such virtual brand display, which is cheaper than buying advertising or opening a new brick and mortar store, can translate to more business in the real world.[11] Brands have started creating virtual experiences that have driven in-store sales and served as powerful marketing. Vans shoe and skateboard company, for example, made a Metaverse skatepark in which users could earn points when “boarding” that were redeemable for discounts inside real Vans stores.[12] Chipotle released a burrito-making game that yielded “burrito bucks” for exchange in their actual restaurants.[13] As use of NFTs grows, and as brands recognize the ramifications of the Hermès lawsuit, we will likely continue to see more trademarks used in the Metaverse. Brand owners should keep in mind the dangers of failing to sufficiently protect their trademarks in the virtual space and the potential for benefits if used strategically.

Notes

[1] Reed Clancy and Alexander Curylo, Verdict Reached in MetaBirkin NFT Case, AIPLA NEWSSTAND (Feb. 9, 2023), https://www.lexology.com/library/detail.aspx?g=0faf6e67-38b4-4add-971d-badd08199c0c&utm_source=Lexology+Daily+Newsfeed&utm_medium=HTML+email+-+Body+-+General+section&utm_campaign=AIPLA+2013+subscriber+daily+feed&utm_content=Lexology+Daily+Newsfeed+2023-02-13&utm_term=.

[2] Muzamil Abdul Huq et al., Hermès Successfully Defends its Trademark in the Metaverse, AIPLA NEWSSTAND (Feb. 9, 2023), https://www.lexology.com/library/detail.aspx?g=6dba3b12-030d-41ff-98c6-1c2aad6468ce&utm_source=Lexology+Daily+Newsfeed&utm_medium=HTML+email+-+Body+-+General+section&utm_campaign=AIPLA+2013+subscriber+daily+feed&utm_content=Lexology+Daily+Newsfeed+2023-02-13&utm_term=.

[3] Id.

[4] Id.

[5] About Trademark Infringement, U.S. PATENT AND TRADEMARK OFFICE, https://www.uspto.gov/page/about-trademark-infringement (last visited Feb. 17, 2023).

[6] Huq et al., Hermès Successfully Defends its Trademark in the Metaverse, AIPLA NEWSSTAND (Feb. 9, 2023).

[7] Id.

[8] Joanna Fantozzi, Why Every Restaurant Operator Should Care About NFTs and the Metaverse Right Now, NATION’S RESTAURANT NEWS (Feb. 25, 2022), https://www.nrn.com/technology/why-every-restaurant-operator-should-care-about-nfts-and-metaverse-right-now.

[9] Zachary Small, Hermès Wins MetaBirkins Lawsuit; Jurors Not Convinced NFTs Are Art, N.Y. TIMES (Feb. 8, 2023), https://www.nytimes.com/2023/02/08/arts/hermes-metabirkins-lawsuit-verdict.html.

[10] Fantozzi, Why Every Restaurant Operator Should Care About NFTs and the Metaverse Right Now, NATION’S RESTAURANT NEWS (Feb. 25, 2022).

[11] Id.

[12] Andrew Hanson, Understanding the Metaverse and its Impact on the Future of Digital Marketing, CUKER (Mar. 29, 2022), https://www.cukeragency.com/understanding-metaverse-and-its-impact-future-digi/.

[13] Dani James, How Retailers are Connecting the Metaverse to Real World Sales and Revenues, RETAIL DIVE (Nov. 14, 2022), https://www.retaildive.com/news/retailers-connecting-metaverse-roblox-real-world-revenue/636209/.


Hazardous Train Derailment: How a Poor Track Record for Private Railway Company May Impact Negligence Lawsuit Surrounding Major Incident

Annelise Couderc, MJLST Staffer

The Incident

On Friday, February 3rd, a train of about 150 cars, many carrying hazardous chemicals, derailed in East Palestine, Ohio. The derailment resulted in the leakage and combustion of an estimated 50 train cars containing chemicals hazardous to both humans and the environment. The mayor of East Palestine initially ordered the city evacuated, neighboring towns were told to stay indoors, and residents were told they could return five days after the explosion. According to a member of the National Transportation Safety Board, 14 cars containing multiple hazardous chemicals were “exposed to fire,” releasing into the air substances that residents could inhale or that could leach into the environment. Among them was vinyl chloride, a chemical used in plastic products that is associated with increased risk of liver cancer and cancer generally. Residents have reported foul smells and headaches since the incident, and locals have reported seeing dead fish in waterways.

The train and railroad in question are owned and operated by Norfolk Southern, a private railway company. Norfolk Southern transports a variety of materials, but is known for its transportation of coal through the East and Midwest regions of the country. In order to prevent a large explosion with the chemicals remaining in the train cars, Norfolk Southern conducted a “controlled release” of the chemicals discharging “potentially deadly fumes into the air” on Monday, February 6th. While the controlled release was likely immediately necessary for safety purposes, exposure to vinyl chloride as a gas can be very dangerous, leading to headaches, nausea, liver cancer, and birth defects.

Government and Norfolk Southern Respond

Following the derailment and fires, a variety of governmental authorities, in addition to Norfolk Southern, have converged to tackle the issue. The Environmental Protection Agency (EPA) and Norfolk Southern are monitoring air quality and giving guidance on when investigators and firefighters may enter the scene safely. In a joint statement on February 8th, the Governors of Ohio and Pennsylvania, as well as East Palestine’s Fire Chief, announced that evacuated residents could return to their homes. As an act of good faith, Norfolk Southern enlisted an independent contractor to work with local and federal officials to test air and water quality, and pledged $25,000 to the American Red Cross and its shelters to help residents. The Ohio National Guard has also been brought onto the scene.

As more information is released, things are heating up in the press as reporters try to learn more about what happened. At a press conference on February 8th with Ohio’s governor, Mike DeWine, a cable news reporter who refused to stop his live broadcast when asked by authorities was pushed by the commander of the Ohio National Guard, then arrested and held in jail for five hours. DeWine denies authorizing the arrest, and a Pentagon official has condemned the behavior as unacceptable. The Ohio attorney general will lead an investigation into the arrest.

Lawsuit Filed Alleges Negligence

Norfolk Southern’s record on brake safety, as well as general operational changes in the railroad sector, may play a role in the lawsuit recently filed in response to the incident. Residents of East Palestine and a local business owner are alleging negligence in a federal lawsuit against Norfolk Southern. Union organizers have expressed concerns that operating changes and cost-cutting measures, such as the elimination of one-third of the workforce over the last six years, have resulted in less thorough inspection and less preventative maintenance. Although railroads are considered the safest way to transport hazardous chemicals, Federal Railroad Administration (FRA) data show that hazardous chemicals were released in 11 accidents in 2022, and in 20 accidents in each of 2020 and 2018. There has recently been an uptick in derailments, and although most occur in remote locations, train derailments have killed people in the past.

The class-action lawsuit alleges negligence against Norfolk Southern for “failing to maintain and inspect its tracks; failing to maintain and inspect its rail cars; failing to provide appropriate instruction and training to its employees; failing to provide sufficient employees to safely and reasonably operate its trains; and failing to reasonably warn the general public.” The plaintiffs allege the company should have known of the dangers posed, and therefore breached their duty to the public.

Specifically relevant to this accident may be Norfolk Southern’s lobbying efforts against the mandatory use of Electronically Controlled Pneumatic (ECP) brakes. In 2014, likely in response to increased incidents, the Obama administration “proposed improving safety regulations for trains carrying petroleum and other hazardous materials,” which included brake improvements. The 2015 Fixing America’s Surface Transportation (FAST) Act required the Department of Transportation (DOT) to test ECP braking, and the U.S. Government Accountability Office (GAO) to calculate the costs and benefits of ECP braking.[1] The GAO conducted a cost-benefit analysis and found that the costs outweighed the benefits.[2] The FRA, the Pipeline and Hazardous Materials Safety Administration (PHMSA), and DOT subsequently abandoned the ECP brake provision of the regulation in 2017. The move followed a change in administration and over $6 million in lobbying money directed toward GOP politicians and the Trump administration by the Association of American Railroads, a lobbying group of which Norfolk Southern is a dues-paying member.

Despite bragging about its use of ECP brakes in its 2007 quarterly report, Norfolk Southern’s lobbying group opposed mandatory ECP brakes, stating “In particular, the proposals for significantly more stringent speed limits than in place today and electronically controlled pneumatic (ECP) brakes could dramatically affect the fluidity of the railroad network and impose tremendous costs without providing offsetting safety benefits.” Although the type of brakes on the train in East Palestine is unknown as of now, a former senior FRA official told a news organization that ECP brakes would have reduced the severity of the accident. Whether or not hauling hazardous materials without ECP braking constitutes negligence, given the federal government’s finding that the brakes are not beneficial enough to mandate, the fact that Norfolk Southern opposed their implementation may still influence the litigation.

Although the current lawsuit alleges negligence against Norfolk Southern, the private company, it is also possible to approach the legal debate from an agency-law perspective. Did PHMSA and the FRA permissibly interpret the FAST Act in declining to include ECP braking requirements when ECP braking was explicitly mentioned in the Act’s text? Did the agencies come to an acceptable conclusion about ECP braking based on the data? If a court were to find the agencies’ decisions outside the scope of the authority granted to them by the FAST Act, or arbitrary and capricious, the agencies could be forced to reevaluate the regulation regarding ECP braking. Congress could also respond with more specific legislation to increase safety measures and prevent something like this from happening again.

The events following the train derailment in Ohio are still unfolding, and many variables remain unknown. It will be interesting to see how the facts develop, and how and whether residents will be able to recoup their losses and recover from the emotional distress this event undoubtedly caused.

Notes

[1] Regulations.gov, regulations.gov (search in search bar for “phmsa-2017-0102”; then choose “Electronically Controlled Pneumatic Braking- Updated Regulatory Impact Analysis”; then click “download.”)

[2] Regulations.gov, regulations.gov (search in search bar for “phmsa-2017-0102”; then choose “Technical Corrections to the Electronically Controlled Pneumatic Braking Final Updated RIA December 2017”; then click “download.”)


Call of Regulation: How Microsoft and Regulators Are Battling for the Future of the Gaming Industry

Caroline Moriarty, MJLST Staffer

In January of 2022, Microsoft announced its proposed acquisition of Activision Blizzard, a video game company, promising to “bring the joy and community of gaming to everyone, across every device.” However, regulators in the United States, the EU, and the United Kingdom have recently indicated that they may block this acquisition on antitrust grounds. In this post I’ll discuss the proposed acquisition, its antitrust concerns, recent actions from regulators, and prospects for the deal’s success.

Background

Microsoft, along with making the Windows platform, the Microsoft Office suite, Surface computers, cloud computing software, and, of new relevance, Bing, is a major player in the video game space. Microsoft owns Xbox, which, along with Nintendo and Sony (PlayStation), is one of the three most popular gaming consoles. One of the main ways these consoles distinguish themselves from their competitors is by designating certain games as “exclusives” that can only be played on a single console. For example, Spider-Man can only be played on PlayStation, the Mario games are exclusive to Nintendo, and Halo can only be played on Xbox. Other games, like Grand Theft Auto, Fortnite, and FIFA, are offered on multiple platforms, allowing consumers to play on whatever console they already own.

Activision Blizzard is a video game holding company, which means the company owns games developed by game development studios. They then make decisions about marketing, creative direction, and console availability for individual games. Some of their most popular games include World of Warcraft, Candy Crush, Overwatch, and one of the most successful game franchises ever, Call of Duty. Readers outside of the gaming space may recognize Activision Blizzard’s name from recent news stories about its toxic workplace culture.

In January 2022, Microsoft announced its intention to purchase Activision Blizzard for $68.7 billion, which would be the largest acquisition in the company’s history. Microsoft stated that its goals were to expand into mobile gaming and to make more titles available, especially through Xbox Game Pass, a streaming service for games. After the announcement, critics pointed out two main issues. First, if Microsoft owned Activision Blizzard, it would be able to make the company’s titles exclusive to Xbox. This is especially problematic for the Call of Duty franchise: not only does it include the top three most popular games of 2022, but an estimated 400 million people play at least one of its games, 42% of whom play on PlayStation. Second, if Microsoft owned Activision Blizzard, it could also make titles exclusive to Xbox Game Pass, which would change the structure of the relatively new cloud streaming market.

The Regulators

Microsoft’s proposed acquisition has drawn scrutiny from the FTC, the European Commission, and the UK Competition and Markets Authority. In what the New York Times has dubbed “a global alignment on antitrust,” the three regulators have pursued a connected strategy. First, the European Commission announced an investigation of the deal in November, signaling that the deal would take time to close. Then, a month later, the FTC sued in its own administrative court, which is more favorable to antitrust claims. In February 2023, the Competition and Markets Authority released provisional findings on the effect of the acquisition on UK markets, writing that the merger may be expected to result in a substantial lessening of competition. Finally, the European Commission completed its investigation, concluding that the possibility of Microsoft making Activision Blizzard titles exclusive “could reduce competition in the markets for the distribution of console and PC video games, leading to higher prices, lower quality and less innovation for console game distributors, which may, in turn, be passed on to consumers.” Together, the agencies are signaling a new era in antitrust – one that is much tougher on deals than in the recent past.

Specifically, the FTC called out Microsoft on its past acquisitions in its complaint. When Microsoft acquired Bethesda (another video game company, known for games like The Elder Scrolls: Skyrim) in 2021, the company told the European Commission that they would keep titles available on other consoles. After the deal cleared, Microsoft announced that many Bethesda titles, including highly anticipated games like Starfield and Redfall, would be Microsoft exclusives. The FTC used this in its complaint to show that any promises by Microsoft to keep games like Call of Duty available to all consumers could be broken at any time. Microsoft has disputed this characterization, arguing that the company made decisions to make titles exclusive on a “case-by-case basis,” which was in line with what it told the European Commission.

For the current deal, Microsoft has agreed to make Call of Duty available on the Nintendo Switch, and it claims to have made an offer to Sony guaranteeing the franchise would remain available on PlayStation for ten years. This type of guarantee is known as a conduct remedy, which preserves competition by requiring the merged firm to commit to taking certain business actions, or refraining from certain business conduct, going forward. In contrast, structural remedies usually require a company to divest certain assets by selling parts of the business. One example of conduct remedies appeared in the Live Nation-Ticketmaster merger: the companies agreed not to retaliate against concert venue customers that switched to a different ticketing service and not to tie sales of ticketing services to concerts Live Nation promoted. However, as the recent Taylor Swift ticketing debacle suggests, conduct remedies may not be effective in eliminating anticompetitive behavior.

Conclusion

Microsoft faces an uphill battle with its proposed acquisition. Despite its claims that Xbox does not exercise outsize influence in the gaming industry, the sheer size and potential effects of this acquisition weaken those claims considerably. Further, the company faces stricter scrutiny from new regulators in the United States. Assistant Attorney General Jonathan Kanter, who leads the DOJ’s antitrust division, has already indicated that he prefers structural remedies to conduct remedies, and Lina Khan, the FTC chair, is well known for her opposition to big tech companies. If Microsoft wants this deal to succeed, it may have to provide more convincing evidence that it will not repeat its anticompetitive conduct of the past.


The Apathetic Divide: Surrogacy and the Anglo-American Courtroom

Kelso Horne, MJLST Staffer

The State of New York defines gestational surrogacy as “a process where one person, who did not provide the egg used in conception, carries a fetus through pregnancy and gives birth to a baby for another person or couple.” The process can be fraught with legal, technical, and moral issues, particularly when the surrogacy is paid for via contract with the surrogate, an arrangement also called Compensated Gestational Surrogacy (CGS). Until 2020, this kind of contractual paid surrogacy was illegal in the state of New York; that year, the Child-Parent Security Act legalized it and normalized the regulatory regime. In contrast, the state of Louisiana has one of the harshest gestational surrogacy regimes in the world, outright banning CGS and requiring both sets of gametes to come from a married couple residing in Louisiana. These competing regulatory regimes are not replicated across the nation; to the contrary, most states have not passed any laws legalizing or banning CGS or other fertility practices, like the sale of gametes. With sparse case law and frequent legal limbo, “is CGS legal for me?” can be a difficult question for many Americans.

Across the Atlantic, the question used to be an easy one to answer. In 1985, the UK Parliament enacted the Surrogacy Arrangements Act, which made it an offense to “initiate or take part in any negotiations with the view of making a surrogacy arrangement,” along with related activities like compiling information to assist in the creation of surrogacy arrangements. Critically, however, the Act did not criminalize looking to hire a surrogate or looking to become one, only acting as a middleman or publishing advertisements on behalf of those seeking a surrogate’s services. The Human Fertilisation and Embryology Act 1990 defined the mother of a child under UK law as “[t]he woman who is carrying or has carried a child… and no other woman.” In 2001, Lady Justice Hale heard the appeal in Briody v. St Helens and Knowsley Area Health Authority. The question before the court was one of damages: a woman, rendered infertile as a result of medical negligence, sought £78,267 to obtain the services of a surrogate in California, which had legalized CGS in 1993 in the landmark case Johnson v. Calvert. Lady Justice Hale, speaking for the court, foreclosed the use of CGS in California or elsewhere, as the proposal was “contrary to the public policy of the country.” While she did not entirely dismiss the idea of providing damages to pay for surrogacy procedures, she said it would be permitted only in the case of a voluntary, unpaid surrogate.

Few appellate judges get to rule on the same facts twice in their career. In 2020, in one of her final cases prior to retiring, Lady Hale, by then sitting on the UK Supreme Court (which replaced the House of Lords as the UK’s highest court in 2009), did just that. In Whittington Hospital NHS Trust (Appellant) v XX, the court determined that a woman who had been rendered infertile as a result of medical negligence could claim damages, including the costs of paying a United States-based surrogate to carry her children. CGS, while still entirely illegal in the UK, could now nevertheless provide the basis for damages in a UK court. The court noted some factual differences between Whittington Hospital and Briody, notably that a surrogacy arrangement was more likely to result in a child in the former. However, the court’s main argument for its opposite ruling was a change in cultural attitudes toward surrogacy and its role in society, stating that “[t]he use of assisted reproduction techniques is now widespread and socially acceptable.”

While admitting that surrogacy was now widely accepted in UK society, the dissent, authored by Lord Carnwath, nevertheless disagreed with the court. It argued that the UK’s criminal law remained clearly averse to commercial surrogacy, and that by awarding damages for CGS in California the court misaligned the UK’s civil and criminal law. Thus, the CGS regimes of the UK and the U.S. are now bound together: UK citizens may seek surrogacy arrangements and have them compensated through the UK’s National Health Service, but they must use an American “womb.” A financial arrangement the UK itself deems too unethical to allow inside its own borders is nevertheless legalized and compensated when it occurs in other countries. This deeply strange situation is mirrored in the opaque CGS law of the United States itself.

A quick glance at any 50-state review of surrogacy laws, compiled by supporters or opponents of commercial surrogacy alike, paints a similar picture: a strange ad hoc mix of case law that often covers ancillary issues or is at least 30 years old. Some scholars have begun publicly discussing the possible ethical pitfalls of “procreative tourism”, but without clear legal rules governing which arrangements are and are not allowed, it is difficult to discuss possible solutions. The dangers of this shadow regime were thrown into stark relief by the war in Ukraine, which prior to the Russian invasion was a major source of surrogate mothers. Surrogates were paid on average $15,000 per child, a considerable sum in a country where, before the invasion, GDP per capita was less than $5,000. The United States needs to determine whether it wishes to become a “destination” country for procreative tourism, as the result in Whittington would seem to suggest it is, and whether it wishes to allow its own citizens the opportunity to travel abroad to engage in CGS.

This blog has touched on only a small fraction of the issues faced in determining the ideal regulatory regime for surrogacy. However, a lack of discussion, and a failure to acknowledge possible risks, leaves us ignorant of what the problems may be, let alone the route to potential solutions. States have largely failed to address the issue since the first CGS babies were born within their borders in the late 1980s and early 1990s, and the UK has now passed the buck as well, without a serious response to the issues surrounding CGS. It is time for a serious examination of CGS regulation as it exists, as well as a meaningful discussion about safeguarding the health and wellbeing of those involved in such a transaction. Regardless of one’s opinion of the Louisiana and New York regulations, potential participants in a surrogacy arrangement in those two states at least know the boundaries. That should be the case nationwide.


Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media have led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% do not use social media.[1] So roughly 60% of the world is on social media, around 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and others in their circle. But along with the growing use of social media, questions have been raised about the potential liability social media corporations may bear for the content posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their message or organize on their platforms.[3] The question we are left with is: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing it to post videos on YouTube and then recommending them to users through Google’s algorithm.[4] And the family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] Gonzalez v. Google and Twitter v. Taamneh will both be argued before the Supreme Court this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. § 230 is intended to protect freedom of expression by shielding intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like 230(c)(1), Section 230(c)(2) gives internet providers liability protection, allowing them to moderate content in certain circumstances while safeguarding them from the free speech claims that might otherwise be made against them.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may proceed against Google for YouTube’s targeted recommendations of those videos.[12] And in Taamneh, the 9th Circuit agreed with the plaintiffs that there was room for the claim to go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. Although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose it due to perceived restriction of conservative viewpoints on social media platforms. Prominent Missouri Republican Josh Hawley, for example, has come out against the provision, arguing that tech platforms ought to be treated as distributors and lose Section 230 protections.[14] In fact, Hawley introduced legislation opposing Section 230, the Federal Big Tech Tort Act, to impose liability on tech platforms.[15] On the left, Section 230 is supported by those who believe it protects the voices of the marginalized, who would otherwise be at the whim of tech companies, and opposed by those who fear it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs argue that Section 230 should not protect Google because the events occurred outside the US, because Section 230 is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and because algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The 9th Circuit held that although Section 230 does apply abroad, JASTA should not supersede it; instead, the two statutes run parallel to each other. The 9th Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed. It did not think Google was contributing to terrorism, because Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal in part because there was insufficient information about how much support Google had provided to ISIS.[19] Future decisions in this case will implicate questions such as whether Section 230 covers algorithmic recommendations.[20]

In Taamneh, the defendants argued that there was no proximate cause, as well as that Section 230 was inapplicable.[21] Unlike in Gonzalez, the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups, and the dismissal was reversed. The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both cases, fears regarding the scope of Section 230 were expressed, which could reflect poorly on its applicability going forward.[24]

Gonzalez and Taamneh will hit the Supreme Court soon. If Section 230 is restricted, platforms would face greater liability and likely moderate far more aggressively, curbing the accessibility and openness that have made the internet what it is today. If it is preserved as is, expression online remains free, but more people risk exposure to harms like hate speech or violence. Whichever way the Court decides, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/#:~:text=The%20number%20of%20social%20media,growth%20of%20%2B137%20million%20users.

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Supra Washington Post.

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Supra Washington Post.

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24] Id.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses will see enhanced regulation of data privacy. Legal requirements for company-held data, designed to protect companies’ customers, have increased as a number of proposed data security laws and regulations came into effect in 2023. Two stand out: the FTC Safeguards Rule and the NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. It requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by the rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires that covered financial institutions “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”

One specific question is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the rule does not run counter to the FTC’s mission of protecting consumers, but its economic costs and effects are debatable. One concern is that the rule may impose substantial costs, especially on small businesses with less capital than large companies. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the rule’s requirements, or have sophisticated programs that are easily adaptable to the new obligations, the FTC Safeguards Rule still underestimates significant burdens.[4] Specifically, labor shortages have hampered financial institutions’ efforts to implement information security systems, and supply chain issues have delayed the equipment needed to update those systems. Importantly, as Commissioner Wilson notes, most of these factors are outside the control of the financial institutions. Implementing a heightened standard could thus be unfair, especially to small financial institutions that have even more trouble obtaining the necessary equipment during supply chain and labor shortages.

Recognizing these difficulties, the FTC offered some leniency in implementing the rule, extending the compliance deadline by six months, primarily due to supply chain issues that could cause delays and a shortage of qualified personnel to implement information security programs. This extension benefits the Rule because it gives covered financial institutions time to adjust and comply.

Another concern is that the mandates will not significantly reduce the data security risks facing customers. The answer remains uncertain, as the FTC Safeguards Rule only recently came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rule-making process the FTC sought comments on the proposed Safeguards Rule and extended the public comment deadline by 60 days.[5] This suggests the FTC took careful consideration of how to most effectively reduce data security risks by giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires member states to be appropriately equipped with response and information systems, sets up a Cooperation Group to facilitate the exchange of information among member states, and seeks to ensure a culture of security across sectors that rely heavily on critical infrastructure, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers to comply with. The NIS2 Directive echoes the FTC Safeguards Rule to a large extent in its elevated standard of cybersecurity measures.

However, the NIS2 Directive differs in imposing duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive designates ENISA to assist Member States and the Cooperation Group set up under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Ordering the agency itself to facilitate the Directive’s implementation may increase its likelihood of success. Although the outcome is uncertain, primarily because of the Directive’s broad language, burdens on financial institutions will at least be lessened to some extent. What also distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This offers more flexibility than the FTC Safeguards Rule’s extension: as the Directive moves through national legislatures, financial institutions will have more time to prepare for and respond to the proposed changes.

In summary, data privacy laws are tightening globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate similar industries. That said, regardless of the EU, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the Rule’s outcome is uncertain, the six-month extension will at least offer a certain degree of flexibility.

Notes

[1]https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.

 


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code by responding to prompts submitted by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The model is autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or attempting the same prompt multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
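As a rough intuition for what “autoregressive” means, the sketch below generates text one word at a time by sampling a next word given only the words so far. The bigram table and function names are invented purely for illustration; real models like ChatGPT learn these probabilities over tokens with a large neural network rather than a hand-written lookup table.

```python
import random

# Hypothetical next-word probabilities, standing in for a trained model.
BIGRAMS = {
    "the": {"model": 0.6, "user": 0.4},
    "model": {"predicts": 1.0},
    "predicts": {"the": 0.5, "words": 0.5},
}

def generate(prompt, steps, seed=0):
    """Autoregressive loop: each new word depends on the text generated so far."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(steps):
        options = BIGRAMS.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 4))
```

The loop is the whole idea: sample a word, append it, and condition the next sample on the longer sequence, which is why small changes to a prompt can snowball into very different outputs.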

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that it is a tipping point for AI because of this difference in quality: it can be used to write weight-loss plans and children’s books, and even to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the minimum of 1,000 words that I had set for it. What is evident across these cases, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, creating computer code for a startup prototype using code libraries they had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, although by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not be graduating at the top of its class, and would actually be placed on academic probation, it would still, notably, graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being used only in areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over some online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities for productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses ChatGPT generates.

There are drawbacks to using AI like ChatGPT for these tasks. While ChatGPT gives human-like answers, it does not necessarily give the right answer. It also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to approximate human intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed to be impossible sooner rather than later. In that light, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] In teaching AI to generate text with a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to utilize its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the concept that this AI is indiscriminately sucking in user data like a black hole.

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might be able to glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on GPS data. But the way our data is being used by ChatGPT is in a league of its own. User data is being iterated upon, and, most importantly, it shapes what ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

At the same time, the general public may not be fully aware of what kind of privacy protections—or lack thereof—are in place in the United States. In brief, we tend to favor free expression over the protection of individual privacy. The federal statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of the ECPA addresses interceptions of communication, such as wiretapping, and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise from the fact that the use and collection of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] The GDPR requires a lawful basis for the collection and use of EU residents’ data, but what counts as “use” with respect to ChatGPT is not clear, and the use of that data in ChatGPT’s fine-tuning process could arguably violate the GDPR.

While a bit of a unique use-case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and the disclosure could potentially violate ABA confidentiality rules. As ChatGPT brews even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they are inputting into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI discloses that it collects information including contact info (name, email, etc.), profiles, technical info (IP, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), affiliates and subsidiaries of the company, the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, compliance with the GDPR and CCPA must be in name only, as these regulations did not even contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if ever. If users of ChatGPT can be cognizant of what they are inputting into the tool, and stay informed about what kind of obligation OpenAI has to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Info Security Magazine (Jan. 28, 2022), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Saving the Planet With Admin Law: Another Blow to Tax Exceptionalism

Caroline Moriarty, MJLST Staffer

Earlier this month, in Green Valley Investors, LLC v. Commissioner, the U.S. Tax Court struck down an administrative notice issued by the IRS regarding conservation easements. While the ruling itself may be minor, the court may be signaling a shift away from tax exceptionalism and toward ordinary administrative law under the Administrative Procedure Act ("APA"), which could have major implications for the way the IRS operates. In this post, I will explain what conservation easements are, what the court ruled, and what the ruling may mean for IRS administrative actions going forward.

Conservation Easements

Conservation easements are used by wealthy taxpayers to get tax deductions. Under Section 170(h) of the Internal Revenue Code ("IRC"), taxpayers who purchase development rights for land, then donate those rights to a charitable organization that pledges not to develop or use the land, get a deduction based on the value of the donated rights. The public gets the benefit of preserved land, which could be used as a park or nature reserve, and the donor gets a tax break.

However, this deduction led to the creation of “syndicated conservation easements.” In this tax scheme, intermediaries purchase vacant land worth little, hire an appraiser to declare its value to be much higher, then sell stakes in the donation of the land to investors, who get a tax deduction that is four to five times higher than what they paid. In exchange, the intermediaries are paid large fees. 

Conservation easements can be used to protect the environment, and proponents of the deduction argue that the easements are a critical tool in keeping land safe from development pressures. However, the IRS and other critics argue that these deductions are abused and cost the government between $1.3 billion and $2.4 billion in lost tax revenue. Some appraisers in these schemes have been indicted for “fraudulent” and “grossly inflated” land appraisals. Both Congress and the IRS have published research about the potential for abuse. In 2022, the IRS declared the schemes one of their “Dirty Dozen” for the year, writing that “these abusive arrangements do nothing more than game the tax system with grossly inflated tax deductions and generate high fees for promoters.”

Notice 2017-10 and the Tax Court’s Green Valley Ruling

To combat the abuse of conservation easements, the IRS released an administrative notice (the "Notice") that required taxpayers to disclose any syndicated conservation easement on their tax returns as a "listed transaction." The Notice did not go through the notice-and-comment procedures required by the APA. Then, in 2019, the IRS disallowed over $22 million in charitable deductions on Green Valley and the other petitioners' taxes for 2014 and 2015 and assessed a variety of penalties.

While the substantive tax law is complex, Green Valley and the other petitioners challenged the penalties, arguing that the Notice justifying them did not go through notice-and-comment procedures. In response, the IRS argued that Congress had exempted the agency from those procedures. Specifically, the IRS pointed to a Treasury Regulation it had issued defining a "listed transaction" as one "identified by notice, regulation, or other form of published guidance," which, the agency claimed, should have indicated to Congress that the IRS would be operating outside of APA requirements when issuing such notices.

The Tax Court disagreed, writing "We remain unconvinced that Congress expressly authorized the IRS to identify a syndicated conservation easement transaction as a listed transaction without the APA's notice-and-comment procedures, as it did in Notice 2017-10." Essentially, the statutes in which Congress authorized IRS penalties did not set the criteria for when taxpayers would incur them, so the IRS filled that gap with rules that never went through APA review. Had Congress, in the penalty statutes, expressly authorized the IRS to determine the requirements for penalties without APA procedures, the Notice would have been valid.

In invalidating the Notice, the Tax Court decided that Notice 2017-10 was a legislative rule requiring notice-and-comment procedures because it imposed substantive reporting obligations on taxpayers with the threat of penalties. Since the decision, the IRS has issued proposed regulations on the same topic that will go through notice-and-comment procedures, while continuing to defend the validity of the Notice in other circuits (the Tax Court adopted reasoning from a Sixth Circuit decision).

The Future of Administrative Law and the IRS 

The decision follows other recent cases in which courts have pushed the IRS to follow APA rules. However, following the APA is a departure from the past understanding of administrative law's role in tax law. In the past, "tax exceptionalism" described the misperception that tax law is so complex and so different from other regulatory regimes that the rules of administrative law don't apply. This understanding has allowed the IRS to issue multiple levels of regulatory guidance, some binding and some not, all without effective oversight from the courts. Further, judicial review of IRS actions is limited by statute, and even where review is available, it may be ineffective if the judges are not tax experts.

This movement towards administrative law has implications for both taxpayers and the IRS. For taxpayers, administrative law principles could provide additional avenues to challenge IRS actions and allow for more remedies. For the IRS, the APA may be an additional barrier to their job of collecting tax revenue. At the end of the day, syndicated conservation easements can be used to defraud the government, and the IRS should do something to curtail their potential for abuse. Following notice-and-comment procedures could delay effective tax administration. However, the IRS is an administrative agency, and it doesn’t make sense to think they can make their own rules or act like they’re not subject to the APA. Either way, administrative law will likely continue to prevail in both federal courts and Tax Court, and it will continue to influence tax law as we know it.