
AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique that fuses data analysis, algorithms, and information technology to anticipate and prevent crime. This approach identifies patterns in data to anticipate when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, ample amounts of data are used to categorize citizens with basic demographic information.[3] Now, machine learning and AI tools augment this data in a way that, according to one software vendor, “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]
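To give a rough sense of the mechanics, the simplest place-based forecasts amount to little more than ranking locations by their historical incident counts. The short Python sketch below is purely illustrative, not any vendor's actual algorithm; the grid cells, hours, and counts are invented, and the point is only that a prediction built from past police records inherits whatever patterns, and biases, those records already contain.

from collections import Counter

# Toy "hotspot" forecast: count historical incidents by (grid cell, hour of day)
# and surface the busiest combinations. Real systems use far richer models,
# but the core input is still where and when police recorded incidents before,
# so any bias in past enforcement flows straight into the "prediction."
incidents = [
    {"cell": "A3", "hour": 22},
    {"cell": "A3", "hour": 23},
    {"cell": "B1", "hour": 14},
    {"cell": "A3", "hour": 22},
    {"cell": "C2", "hour": 2},
    {"cell": "B1", "hour": 15},
]

counts = Counter((i["cell"], i["hour"]) for i in incidents)

# "Predict" by ranking the historically busiest cell-hour combinations.
for (cell, hour), n in counts.most_common(3):
    print(f"cell {cell}, {hour:02d}:00 -> {n} past incidents")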

Both predictive policing and AI have faced significant challenges concerning issues of equity and discrimination. In response to these concerns, the European Union has taken proactive steps, promulgating sophisticated rules governing AI applications within its territory and continuing its tradition of leading in regulatory initiatives.[5] Dubbed the “Artificial Intelligence Act,” the regulation clearly outlines the Union’s goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks when deployed by law enforcement agencies in a nation where a Black person is already far more likely to be stopped without just cause than their white counterparts.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing…Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The obvious advantages of these systems cannot be ignored, allowing agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union’s guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]

 

Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265-267 (2012).

[2] Eric M. Baker, I’ve Got My AI on You: Artificial Intelligence in the Law Enforcement Domain, 47 (Mar. 2021) (Master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing… parallels the purported colorblindness of mandatory database screening and vetting systems”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive Policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Artificial Intelligence- Predictive policing algorithms are racist. They need to be dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book. Estimated number of arrests by offense and race, 2020. Available: https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2. Released on July 08, 2022).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson, and ChatGPT, “Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives,” Police Chief Online, April 3, 2024.

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Society for Human Resource Management (December 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.

 


Enriching and Undermining Justice: The Risks of Zoom Court

Matthew Prager, MJLST Staffer

In the spring of 2020, the United States shut down public spaces in response to the COVID-19 pandemic. The court system did not escape this process, with jury trials across the country paused in March 2020.[1] In this rapidly changing environment, courts scrambled to adjust, using a slew of modern telecommunication and video conferencing systems to resume the various aspects of the courtroom system in the virtual world. Despite this radical upheaval to the traditional courtroom structure, this new form of court appears here to stay.[2]

Much has been written about the benefits of telecommunication services like Zoom and similar software to the courtroom system.[3] However, while Zoom court has been a boon to many, Zoom-style virtual court appearances also present legal challenges.[4] Some of these problems affect all courtroom participants, while others disproportionately affect highly vulnerable individuals’ ability to participate in the legal system.

Telecommunications, like all forms of technology, is vulnerable to malfunctions and ‘glitches,’ and these glitches can put a party’s engagement with the legal system at a significant disadvantage. In the most direct sense, glitches, be they video malfunctions, audio or microphone failures, or unstable internet connections, can limit a party’s ability to hear and be heard by their attorneys, opposing parties, or the judge, ultimately compromising their legitimate participation in the legal process.[5]

But glitches can have effects beyond disrupting direct communications. One study found that participants evaluated individuals suffering from connection issues as less likable.[6] Another study, using mock jurors, found that those shown a video on a broken VCR recommended higher prison terms than a group of mock jurors provided with a functional VCR.[7] In effect, technology can act as a third party in a courtroom, and when that third party misbehaves, frustrations can unjustly prejudice a party with deleterious consequences.

Even absent glitches, observing a person through a screen can have a negative impact on how that person is perceived.[8] Researchers noted this issue even before the pandemic. Online bail hearings conducted by closed-circuit camera led to significantly higher bond amounts than those conducted in person.[9] Simply adjusting the camera angle can alter the perception of a witness in the eyes of the observer.[10]

These issues represent a universal problem for any party in the legal system, but they are especially impactful on the elderly population.[11] Senior citizens often lack digital literacy with modern and emerging technologies, and may find that their first experience with these telecommunications systems is in a courtroom hearing, that is, if they even have access to the necessary technology.[12] These issues can have extreme consequences; in one case, an elderly defendant violated his probation because he failed to navigate a faulty Zoom link.[13] The elderly are especially vulnerable because issues with technical literacy can be compounded by sensory difficulties. One party with bad eyesight found that requiring communication through a screen functionally deprived him of any communication at all.[14]

While there has been some effort to return to the in-person court experience, the benefits of virtual trials are too significant to ignore.[15] Virtual court minimizes transportation costs, allows vulnerable parties to engage with the legal system from the safety and familiarity of their own homes, and simplifies the logistics of the courtroom process. These benefits are indisputable for many participants in the legal system. But they are accompanied by drawbacks, and practicalities aside, the adverse and disproportionate impact on senior citizens in virtual courtrooms should be seen as a problem to solve, not simply endure.

Notes

[1] Debra Cassens Weiss, A slew of federal and state courts suspend trials or close for coronavirus threat, ABA JOURNAL (March 18, 2020) (https://www.abajournal.com/news/article/a-slew-of-federal-and-state-courts-jump-on-the-bandwagon-suspending-trials-for-coronavirus-threat).

[2] How Courts Embraced Technology, Met the Pandemic Challenge, and Revolutionized Their Operations, PEW CHARITABLE TRUSTS (Dec. 1, 2021) (https://www.pewtrusts.org/en/research-and-analysis/reports/2021/12/how-courts-embraced-technology-met-the-pandemic-challenge-and-revolutionized-their-operations).

[3] See Amy Petkovsek, A Virtual Path to Justice: Paving Smoother Roads to Courtroom Access, ABA (June 3, 2024) (https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/technology-and-the-law/a-virtual-path-to-justice) (finding that Zoom court: minimizes transportation costs for low-income, disabled or remote parties; allows parties to participate in court from a safe or trusted environment; minimizes disruptions for children who would otherwise miss entire days of school; protects undocumented individuals from the risk of deportation; diminishes courtroom reschedulings from parties lacking access to childcare or transportation and allows immune-compromised and other high health-risk parties to engage in the legal process without exposure to transmittable illnesses).

[4] Daniel Gielchinsky, Returning to Court in a Post-COVID Era: The Pros and Cons of a Virtual Court System, LAW.com (Mar. 15, 2024) (https://www.law.com/dailybusinessreview/2024/03/15/returning-to-court-in-a-post-covid-era-the-pros-and-cons-of-a-virtual-court-system/).

[5] Benefits & Disadvantages of Zoom Court Hearings, APPEL & MORSE, (https://www.appelmorse.com/blog/2020/july/benefits-disadvantages-of-zoom-court-hearings/) (last visited Oct. 7, 2024).

[6] Angela Chang, Zoom Trials as the New Normal: A Cautionary Tale, U. CHI. L. REV. (https://lawreview.uchicago.edu/online-archive/zoom-trials-new-normal-cautionary-tale) (“Participants in that study perceived their conversation partners as less friendly, less active and less cheerful when there were transmission delays. . . .compared to conversations without delays.”).

[7] Id.

[8] Id. (noting that “screen” interactions are remembered less vividly and obscure important nonverbal social cues).

[9] Id.

[10] Shannon Havener, Effects of Videoconferencing on Perception in the Courtroom (2014) (Ph.D. dissertation, Arizona State University).

[11] Virtual Justice? A National Study Analyzing the Transition to Remote Criminal Court, STANFORD CRIMINAL JUSTICE CENTER, Aug. 2021, at 78.

[12] Id. at 79 (describing how some parties lack access to phones, Wi-Fi or any methods of electronic communication).

[13] Ivan Villegas, Elderly Accused Violates Probation, VANGUARD NEWS GROUP (October 21, 2022) (https://davisvanguard.org/2022/10/elderly-accused-violates-probation-zoom-problems-defense-claims/).

[14] John Seasly, Challenges arise as the courtroom goes virtual, INJUSTICE WATCH (April 22, 2020) (https://www.injusticewatch.org/judges/court-administration/2020/challenges-arise-as-the-courtroom-goes-virtual/).

[15] Kara Berg, Leading Michigan judges call for return to in-person court proceedings, THE DETROIT NEWS (Oct. 2, 2024, 9:36 PM) (https://detroitnews.com/story/news/local/michigan/2024/10/02/leading-michigan-judges-call-for-return-to-in-person-court-proceedings/75484358007/#:~:text=Courts%20began%20heavily%20using%20Zoom,is%20determined%20by%20individual%20judges).


EJScreen: The Environmental Justice Tool That You Didn’t Know You Needed

Emma Ehrlich, Carlisle Ghirardini, MJLST Staffer

What is EJScreen?

EJScreen was developed by the Environmental Protection Agency (“EPA”) in 2010, 16 years after President Clinton’s Executive Order 12898 required federal agencies to begin keeping data regarding “environmental and human health risks borne by populations identified by race, national origin or income.” The program has been available to the public through the EPA’s website since 2015 and is a mapping tool that allows users to look at specific geographic locations and set overlays that show national percentiles for categories such as income, people of color, pollution, health disparities, etc. Though the EPA warns that EJScreen is simply a screening tool and has its limits, the EPA uses the program in “[i]nforming outreach and engagement practices, [i]mplementing aspects of …permitting, enforcement, [and] compliance, [d]eveloping retrospective reports of EPA work, [and] [e]nhancing geographically based initiatives.”

As the EPA warns on its website, EJScreen does not contain all pertinent information regarding environmental justice and other data should be collected when studying specific areas. However, EJScreen is still being improved and was updated to EJScreen 2.0 in 2022 to account for more data sets, including data on which areas lack access to food, broadband, and medical services, as well as health disparities such as asthma and life expectancy.

Current Uses

EJScreen software is now being used to evaluate the allocation of federal funding. In February of this year, the EPA announced that it will be allocating $1 billion in funding from President Biden’s Bipartisan Infrastructure Law to Superfund cleanup projects at sites containing retired mines, landfills, and processing and manufacturing plants. The EPA said that 60% of the new projects are in locations that EJScreen indicated were subject to environmental justice concerns.

EJScreen is also used to evaluate permits. The EPA published its own guidance in August of 2022 to address environmental justice permitting procedures. The guidance encourages states and other recipients of financial assistance from the EPA to use EJScreen as a “starting point” when looking to see if a project whose permit is being considered may conflict with environmental justice goals. The EPA believes this will “make early discussions more meaningful and productive and add predictability and efficiency to the permitting process.” If an early EJScreen brings a project into question, the EPA instructs permitters to consider additional data before making a permitting decision.

Another use of EJScreen is in the review of Title VI Civil Rights Act complaints. Using the authority provided by Title VI, the EPA has promulgated rules that prohibit any agency or group receiving federal funding from the EPA from functioning in a discriminatory way based on race, color, or national origin. The rules also enable people to submit Title VI complaints directly to the EPA when they believe a funding recipient is acting in a discriminatory manner. If it is warranted by the complaint, the EPA will conduct an investigation. Attorneys who have reviewed EPA response letters announcing its decision to conduct an investigation based on a complaint have noted that the EPA often cites EJScreen when explaining why it decided to move forward with the investigation.

In October of 2022, the EPA sent a “Letter of Concern” to the Louisiana Department of Environmental Quality (“LDEQ”) and the Louisiana Department of Health stating that an initial investigation suggested that the two departments had acted in ways that had “disparate adverse impacts on Black residents” when issuing air permits or informing the public of health risks. When discussing a nearby facility’s harmful health effects on residents, the EPA cites data from EJScreen in concluding that the facility is much more likely to affect Black residents of Louisiana than non-Black residents. The letter also touches on incorrect uses of EJScreen, saying that LDEQ’s conclusion that a proposed facility would not affect surrounding communities was misleading: the LDEQ used EJScreen to show that there were no residents within a mile of the proposed facility but ignored a school located only 1.02 miles away from the proposed location.

Firms such as Beveridge & Diamond have recognized the usefulness of this technology. They urge industry decision makers to use this free tool, and others similar to it, to preemptively consider environmental justice issues that their permits and projects may face when being reviewed by the EPA or local agencies.

Conclusion

In conclusion, EJScreen has the potential to be a useful tool, especially as the EPA continues to update it with data for additional demographics. However, users of the software should heed the EPA’s warning that this is simply a screening tool. It is likely best used to rule out locations for certain projects, rather than relied on as the sole basis for approving projects in certain locations, a decision which requires more recent data to be collected.

Lastly, EJScreen is just one of many environmental justice screening tools being used and developed. Multiple states have been developing their own screening programs, and there is research showing that using state screening software may be more beneficial than national software. An environmental justice screening tool was also developed by the White House Council on Environmental Quality in 2022. Its Climate and Economic Justice Screening Tool is meant to assist the government in assigning federal funding to disadvantaged communities. The consensus seems to be that all available screening tools are helpful in at least some way and should be consulted by funding recipients and permit applicants in the early rounds of their decision making processes.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. However, what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use the internet. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment which takes a person out of the real world. On the other hand, augmented reality uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think about the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse to look like. The game Second Life is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities that the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person being assaulted in a virtual world would feel like they actually experienced the assault in real life. This should be raising red flags. However, the problem arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in a virtual reality is different than assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws regarding conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior from happening. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse due to users’ ability to mask their identity and remain anonymous. Therefore, it could be difficult to figure out who committed certain prohibited acts. At the moment, some of the virtual realities have terms of service which attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or any rules regarding conduct in Horizon Worlds. However, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place in order to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Breaking the Tech Chain to Slow the Growth of Single-Family Rentals

Sarah Bauer, MJLST Staffer

For many of us looking to buy our first homes during the pandemic, the process has ranged from downright comical to disheartening. Here in Minnesota, the Twin Cities have the worst housing shortage in the nation, a problem that has both Republican and Democratic lawmakers searching for solutions to help both renters and buyers access affordable housing. People of color are particularly impacted by this shortage because the Twin Cities are also home to the largest racial homeownership gap in the nation.

Although these issues have complex roots, tech companies and investors aren’t helping. The number of single-family rental (SFR) units, single-family homes purchased by investors and rented out for profit, has risen since the Great Recession and exploded over the course of the pandemic. In the Twin Cities, Black neighborhoods have been particularly targeted by investors for this purpose. In 2021, 8% of the homes sold in the Twin Cities metro were purchased by investors, but investors purchased homes in BIPOC-majority zip codes at nearly double the rate of white-majority neighborhoods. Because property ownership is a vehicle for wealth-building, removing housing stock from the available pool essentially transfers the opportunity to build wealth from individual homeowners to investors, who can profit both from rents and from the increased value of the property at sale.

It’s not illegal for tech companies and investors to purchase and rent out single-family homes. In certain circumstances, it may actually be desirable for them to be involved in the market. If you are a seller that needs to sell your home before buying a new one, house-flipping tech companies can get you out of your home faster by purchasing the home without a showing, an inspection, or contingencies. And investors purchasing single-family homes can provide a floor to the market during slowdowns like the Great Recession, a service which benefits homeowners as well as the investors themselves. But right now we have the opposite problem: not enough homes available for first-time owner-occupants. Assuming investor-ownership is becoming increasingly undesirable, what can we do about it? To address the problem, we need to understand how technology and investors are working in tandem to increase the number of single-family rentals.

 

The Role of House-Flipping Technology and iBuyers

The increase in SFRs is fueled by investors of all kinds: corporations, local companies, and wealthy individuals. For smaller players, recent developments in tech have made it easier for them to flip their properties. For example, a recent CityLab article discussed FlipOS, “a platform that helps investors prioritize repairs, access low-interest loans, and speed the selling process.” Real estate is a decentralized industry, and such platforms make the process of buying single-family homes and renting them out faster. Investors see this as a benefit to the community because rental units come onto the market faster than they otherwise would. But this technology also gives such investors a competitive advantage over would-be owner-occupiers.

The explosion of iBuying during the pandemic also hasn’t helped. iBuyers — short for “instant buyers” — use AI to generate automated valuation models to give the seller an all-cash, no contingency offer. This enables the seller to offload their property quickly, while the iBuyer repairs, markets, and re-sells the home. iBuyers are not the long-term investors that own SFRs, but the house-flippers that facilitate the transfer of property between long-term owners.
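For a sense of what an automated valuation model is doing under the hood, the sketch below fits a toy linear model of sale price on a few home features and then prices a hypothetical new listing. It is a bare-bones illustration built on invented numbers, not how any iBuyer actually prices homes; commercial models use far more data and far more sophisticated methods, which is exactly why they can still misfire, as Zillow's experience showed.

import numpy as np

# Toy automated valuation model (AVM): ordinary least squares on
# square footage, bedrooms, and lot size. All figures are invented.
features = np.array([
    [1400, 3, 0.15],
    [1750, 3, 0.20],
    [2100, 4, 0.25],
    [1200, 2, 0.10],
    [1900, 4, 0.18],
], dtype=float)
prices = np.array([310_000, 355_000, 420_000, 265_000, 390_000], dtype=float)

# Add an intercept column and solve for the coefficients.
X = np.hstack([features, np.ones((features.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

# Price a new listing: 1,600 sq ft, 3 bedrooms, 0.17-acre lot.
new_home = np.array([1600, 3, 0.17, 1.0])
print(f"Estimated offer: ${new_home @ coef:,.0f}")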

iBuyers like Redfin, Offerpad, and Opendoor (and formerly Zillow) have increasingly purchased properties in this way over the course of the pandemic. This is particularly true in Sunbelt states, which have a lot of new construction of single-family homes that are easier to accurately price. As was apparent from the demise of Zillow’s iBuying program, these companies have struggled with profitability because home values can be difficult to predict. The aspects of real estate transactions that slow down traditional homebuyers (title checks, inspections, etc.) also slow down iBuyers. So they can buy houses fast by making all-cash offers with no inspection, but they can’t really offload them faster than any other seller.

To the degree that iBuyers in the market are a problem, that problem is two-fold. First, they make it harder for first-time homeowners to purchase homes by offering cash and waiving inspections, something few first-time homebuyers can afford to offer. The second problem is a bigger one: iBuyers are buying and selling a lot of starter homes to large, non-local investors rather than back to owner-occupants or local landlords.

 

Transfer from Flippers to Corporate Investors

iBuyers as a group sell a lot of homes to corporate landlords, but it varies by company. After Zillow discontinued its iBuying program, Bloomberg reported that the company planned to offload 7,000 homes to real estate investment trusts (REITs). Offerpad sells 10-20% of its properties to institutional investors. Opendoor claims that it sells “the vast majority” of its properties to owner-occupiers. RedfinNow doesn’t sell to REITs at all. Despite the variation between companies, iBuyers on the whole sold one-fifth of their flips to institutional investors in 2021, with those sales more highly concentrated in neighborhoods of color. 

REITs allow firms to pool funds, buy bundles of properties, and convert them to SFRs. In addition to shrinking the pool of homes available for would-be owner-occupiers, REITs hire or own corporate entities to manage the properties. Management companies for REITs have increasingly come under fire for poor management, aggressively raising rent, and evictions. This is as true in the Twin Cities as elsewhere. Local and state governments do not always appear to be on the same page regarding enforcement of consumer and tenant protection laws. For example, while the Minnesota AG’s office filed a lawsuit against HavenBrook Homes, the city of Columbia Heights renewed rental occupancy licenses for the company. 

 

Discouraging iBuyers and REITs

If we agree as a policy matter that single-family homes should be owner-occupied, what are some ways to slow down the transfer of properties and give traditional owner-occupants a fighting chance? The most obvious place to start is by considering a ban on iBuyers and investment firms acquiring homes. The Los Angeles city council voted late last year to explore such a ban. Canada has voted to ban most foreigners from buying homes for two years to temper its hot real estate market, a move which will affect iBuyers and investors.

Another option is to make flipping single-family homes less attractive for iBuyers. A state lawmaker from San Diego recently proposed Assembly Bill 1771, which would impose an additional 25% tax on the gain from a sale occurring within three years of a previous sale. This is a spin on the housing affordability wing of Bernie Sanders’s 2020 presidential campaign, which would have placed a 25% house-flipping tax on sellers of non-owner-occupied property and a 2% empty homes tax on vacant, owned homes. But if iBuyers arguably provide a valuable service to sellers, then it may not make sense to attack iBuyers across the board. Instead, it may make more sense to limit or heavily tax sales from iBuyers to investment firms, or the opposite: reward iBuyers with a tax break for reselling homes to owner-occupants rather than to investment firms.

It is also possible to make investment in single-family homes less attractive to REITs. In addition to banning sales to foreign investors, the Liberal Party of Canada pitched an “excessive rent surplus” tax on post-renovation rent surges imposed by landlords. In addition to taxes, heavier regulation might be in order. Management companies for REITs can be regulated more heavily by local governments if the government can show a compelling interest reasonably related to accomplishing its housing goals. Whether REIT management companies are worse landlords than mom-and-pop operations is debatable, but the scale at which REITs operate should on its own make local governments think twice about whether it is a good idea to allow so much property to transfer to investors. 

Governments, neighborhood associations, and advocacy groups can also engage in homeowner education regarding the downsides of selling to an iBuyer or investor. Many sellers are hamstrung by needing to sell quickly or to the highest bidder, but others may have more options. Sellers know who they are selling their homes to, but they have no control over to whom that buyer ultimately resells. If they know that an iBuyer is likely to resell to an investor, or that an investor is going to turn their home into a rental property, they may elect not to sell their home to the iBuyer or investor. Education could go a long way for these homeowners. 

Lastly, governments themselves could do more. If they have the resources, they could create a variation on Edina’s Housing Preservation program, where homeowners sell their house to the City to preserve it as an affordable starter home. In a tech-oriented spin on that program, the local government could purchase the house to make sure it ends up in the hands of another owner-occupant, rather than an investor. Governments could decline to sell single-family homes seized through tax forfeiture to iBuyers or investors. Governments can also encourage more home-building by loosening zoning restrictions. More homes means a less competitive housing market, which REIT defenders say will make the single-family market less of an attractive investment vehicle. Given the competitive advantage of such entities, it seems unlikely that first-time homebuyers could be on equal footing with investors absent such disincentives.


Xenotransplantation: Ethics and Public Policy Need to Catch Up to the Science

Claire Colby, MJLST Staffer

In early January, surgeons at the University of Maryland Medical Center made history by successfully transplanting a genetically altered pig heart into a human recipient, David Bennett. The achievement represents a major milestone in transplantation. The demand for transplantable organs far outpaces the supply, and xenotransplantation, the implantation of non-human tissue into human recipients, could help bridge this gap. In the U.S. alone, more than 106,000 people are on the waiting list for transplants. Legal and ethical questions remain open about the appropriateness of implementing xenotransplants on a large scale.

The FDA approved the January transplant through an emergency authorization under its compassionate use pathway because Bennett likely would have died without this intervention. Larger clinical trials will be needed to generate enough data to show that xenotransplants are safe and effective. The FDA will require these trials to show that xenotransplantations are non-inferior to human organ transplants. IRB requirements bar interventions where risks outweigh benefits for patients, but accurately predicting and measuring risk is difficult.

If xenotransplantation becomes standard clinical practice, animal rights proponents may balk at the idea of raising pigs for organs. Far before that point, pre-clinical trials will make heavy use of animal models. Institutional Animal Care and Use Committees (IACUCs), which oversee animal research at universities and medical entities, apply a “much lower ethical standard” for animals than for human research subjects. Bioethicists apply a “3R” framework for animal subjects research that stresses replacing animal models, reducing animal testing, and refining their use. Because of the inherent nature of xenotransplantation, applying this framework may be near impossible. Ongoing discussions are needed with relevant stakeholders.

If both human and animal organs are approved for widespread transplant, but human organs prove superior, new allocation policies are needed to determine who gets what. Organ allocation policy is currently dictated by the Organ Procurement and Transplantation Network (OPTN). As it stands, organ transplantation shows disparities across racial groups and financial status. New allocation policies for organs must not reinforce or worsen these disparities.

Like all medical interventions, patients must be able to provide informed consent for xenotransplantation. The recipient of the altered pig heart had previously been deemed ineligible for a human heart transplant because his heart failure was poorly managed. Reserving experimental interventions, like xenotransplantations, for the sickest patients raises serious ethical concerns. Are these desperate patients truly able to give meaningful consent? If xenotransplantation becomes a common practice, the traditional model of institutional review boards may need updating. Currently, individual institutions maintain their own IRBs. Xenotransplantation of altered animal organs may involve several sites: procurement of the organ, genetic editing, and transplantation may all take place in different locations. A central IRB for xenotransplantation could standardize and streamline this process. 

In all, xenotransplantation represents an exciting new frontier in transplant medicine. Responsibly implementing this innovation will require foresight and parallel innovation in ethics and public policy. 


Holy Crap: The First Amendment, Septic Systems, and the Strict Scrutiny Standard in Land Use Law

Sarah Bauer, MJLST Staffer

In the summer of 2021, the U.S. Supreme Court released a bevy of decisions favoring religious freedom. Among these was Mast v. City of Fillmore, a case about, well, septic systems and the First Amendment. But Mast is about so much more than that: it showcases the Court’s commitment to free exercise in a variety of contexts and Justice Gorsuch as a champion of Western sensibilities. It also demonstrates that, moving forward, the government is going to need to work harder to support the claim that its compelling interest in land use regulation trumps an individual’s free exercise rights.

The Facts of Mast

To understand how septic systems and the First Amendment can even exist in the same sentence, it’s important to know the facts of Mast. In the state of Minnesota, the Pollution Control Agency (MPCA) is responsible for maintaining water quality. It promulgates regulations accordingly, then local governments adopt those regulations into ordinances. Among those are prescriptive regulations about wastewater treatment. At issue is one such ordinance adopted by Fillmore County, Minnesota, that requires most homes to have a modern septic system for the disposal of gray water.

The plaintiffs in the case are Swartzentruber Amish. They sought a religious exemption from the ordinance, saying that their religion forbade the use of that technology. The MPCA instead demanded the installation of the modern system under threat of criminal penalty, civil fines, and eviction from their farms. When the MPCA rejected a low-tech alternative offered by the plaintiffs, a mulch basin system not uncommon in other states, the Amish sought relief on grounds that the ordinance violated the Religious Land Use and Institutionalized Persons Act (RLUIPA). After losing the battle in state courts, the Mast plaintiffs took it to the Supreme Court, where the case was decided in their favor last summer.

The First Amendment and Strict Scrutiny

Mast’s issue is a land use remix of Fulton v. City of Philadelphia, another free exercise case from the same docket. Fulton, the more controversial and well-known of the two, involved the City of Philadelphia’s decision to discontinue contracts with Catholic Social Services (CSS) for placement of children in foster homes. The City said that CSS’s refusal to place children with same-sex couples violated a non-discrimination provision in both the contract and the non-discrimination requirements of the citywide Fair Practices Ordinance. The Supreme Court didn’t buy it, holding instead that the City’s policy impermissibly burdened CSS’s free exercise of religion.

The Fulton decision was important for refining the legal analysis and standards when a law burdens free exercise of religion. First, if a law incidentally burdens religion but is both 1) neutral and 2) generally applicable, then courts will not ordinarily apply a strict scrutiny standard on review. If one of those elements is not met, courts will apply strict scrutiny, and the government will need to show that the law 1) advances a compelling interest and 2) is narrowly tailored to achieve those interests. The trick to strict scrutiny is this: the government’s compelling interest in denying an exception needs to apply specifically to those requesting the religious exception. A law examined under strict scrutiny will not survive if the State only asserts that it has a compelling interest in enforcing its laws generally.

Strict Scrutiny, RLUIPA, and Mast

The Mast Plaintiffs sought relief under RLUIPA. RLUIPA isn’t just a contender for Congress’s “Most Difficult to Pronounce Acronym” Award. It’s a choice legal weapon for those claiming that a land use regulation restricts free exercise of religion. The strict scrutiny standard is built into RLUIPA, meaning that courts skip straight to the question of whether 1) the government had a compelling government interest, and 2) whether the rule was the least restrictive means of furthering that compelling government interest. And now, post-Fulton, that first inquiry involves looking at whether the government had a compelling interest in denying an exception specifically as it applies to plaintiffs.

So that is how we end up with septic systems and the First Amendment in the same case. The Amish sued under RLUIPA, the Court applied strict scrutiny, and the government failed to show that it had a compelling interest in denying the Amish an exception to the rule that they needed to install a septic system for their gray water. Particularly convincing, at least from Coloradan Justice Gorsuch’s perspective, were the facts that 1) Minnesota law allowed exemptions for campers and outdoorsmen, 2) other jurisdictions allowed gray water disposal in the same alternative manner suggested by the plaintiffs, and 3) the government couldn’t show that the alternative method wouldn’t effectively filter the water.

So what does this ultimately mean for land use regulation? It means that in the niche area of RLUIPA litigation, religious groups have a stronger strict scrutiny standard to lean on, forcing governments to present more evidence justifying a refusal to extend religious exemptions. And governments can’t bypass the standard by making regulations more “generally applicable,” for example by removing exemptions for campers. Strict scrutiny still applies under RLUIPA, and governments are stuck with it, resulting in a possible windfall of exceptions for the religious.


Tinder Shows Discrimination Can Take All Shapes in the Internet Age

Caleb Holtz, MJLST Staffer

On January 20th, Tinder Inc., the company responsible for the popular dating mobile app, filed a proposed settlement agreement worth over $17 million. The settlement would resolve claims that Tinder charged older users more to use the app solely because of their age. Interestingly, while many people think of age discrimination as solely the concern of AARP members, this discrimination was against people over the age of 29. This is because of the relatively low threshold in California for what can constitute age discrimination under California civil rights and consumer protection laws.

Discrimination is incredibly common in the Internet age, at least partially because it is so easy to do. Internet users develop a digital “fingerprint” over time and usage which follows them from website to website. A digital “fingerprint” can contain information drawn from the “websites you visit, social platforms you use, searches you perform, and content you consume.” Digital fingerprinting is becoming even more common, as enterprising trackers have discovered a way to track users across multiple different browsing applications. When this information is combined with data users willfully give out on the internet, such as personal data on Facebook or Tinder, it is incredibly easy for companies to create a profile of all of a user’s relevant characteristics. From there it is easy to choose on what grounds to distinguish among, or discriminate against, users.
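To make the mechanics a bit more concrete, the snippet below is a minimal, hypothetical sketch of how a tracker might boil a handful of browser attributes down to a single stable identifier. The attribute names and hashing step are illustrative assumptions, not taken from any real tracking library, but they show why the same browser yields the same “fingerprint” on every site that computes it.

import hashlib
import json

def fingerprint(attributes: dict) -> str:
    # Serialize the attributes deterministically so the same browser
    # always produces the same string, then hash it into a compact,
    # opaque identifier that can be matched across websites.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative attributes a script can read client-side.
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "screen": "2560x1440",
    "timezone": "America/Chicago",
    "language": "en-US",
    "fonts": ["Arial", "Georgia", "Helvetica"],
}

print(fingerprint(visitor))  # same attributes -> same ID, site after site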

Discrimination in this manner is not always necessarily bad. On the most positive end of the spectrum, institutions like banks can use the information to discern whether the wrong person is trying to access an account, based on the person’s digital fingerprint. More commonly, internet companies use the data to discriminate against users, controlling what they see and the price they are offered. A quintessential example of this practice was the study that found travel websites show higher prices to Mac users than to PC users. Advocates of the practice argue that it allows companies to customize the user experience on an individual basis, allowing the user to see only what they want to see. They also say that it allows businesses to maximize efficiency, both in terms of maximizing profits and in terms of catering to the customer flow, which would therefore lead to a better user experience in the long run. To this point, the argument in favor of continuing this practice has generally won out, as it remains largely legal in the United States.

Opponents of the practice however say the costs outweigh the benefits. Many people, when shown just how much personal data follows them around the internet, will find the practice “creepy”. Opponents hope they can spread this general sentiment by showing more people just how much of their data is online without their explicit consent. This idea has support because, “despite its widespread use, price discrimination is largely happening without the knowledge of the general public, whose generally negative opinion of the practice has yet to be heard.”

More serious opponents move past the “creepiness” and into the legal and ethical issues that can pop up. As the Tinder case demonstrates, online discrimination can take an illegal form, violating state or federal law. Discrimination can also be much more malicious, allowing companies looking for new employees to choose who even sees a job opening based on factors like race, age, or gender. As Gillian B. White recently summarized nicely, “while it’s perfectly legal to advertise men’s clothing only to men, it’s completely illegal to advertise most jobs exclusively to that same group.” Now, as the Tinder case demonstrates, in certain scenarios it may be illegal to discriminate in pricing as well as in job searches.

So what can be done about this from a legal perspective? Currently, the main price discrimination laws in the United States, the Sherman Antitrust Act, the Clayton Act, and the Robinson-Patman Act, were created long before the advent of the internet and allow for price discrimination as long as there is a “good faith reason.” (Part of the trouble Tinder ran into in litigation was a judge’s finding that there was not a good faith reason to discriminate as it was doing.) There is also a plethora of hiring discrimination laws that make certain discrimination by hiring employers illegal. Therefore, the best current option may be for internet watchdog groups to keep a keen eye out for these practices and report what they come across.

As far as how the law can be changed, an interesting option exists elsewhere in the world. European Union data privacy laws may soon make some price discrimination illegal, or at the very least, significantly more transparent so users are aware of how their data is being used. Perhaps by similarly shining sunlight on the issue here in the states, consumers will begin forcing companies to change their practices.