Data

“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, they can report the crime to the police and hold the perpetrator accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to navigate the system, most people have a general grasp of the basic steps of the process and know that wrongdoers can be held accountable. In real life, there are explicit, written laws that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to govern how people may use it. Now a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and others in the legal profession to imagine applying the law to a technology that is not yet fully developed. Nevertheless, Congress and other lawmaking bodies will need to consider how to regulate use of the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, starts with the real world and adds to or changes it, often using a camera. Both virtual and augmented reality are in use today, often in video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I have tried virtual reality games myself, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant where I was working. An example of augmented reality is Pokémon GO, which many people have played. Blockchain technology, the third component, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three technologies, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
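
The tamper-evidence behind that “distributed ledger” phrase is worth a brief illustration. The sketch below is a deliberately minimal hash chain in Python, not any production blockchain protocol (it has no distribution or consensus layer), and all names in it are illustrative; it shows only why a ledger of linked hashes makes rewriting an asset’s provenance detectable:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical (key-sorted) JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each block commits to its predecessor's hash, so altering any
    # earlier record invalidates every block that follows it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    # Recompute every link; a single tampered block breaks the chain.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, {"asset": "parcel-42", "owner": "alice"})
append_block(ledger, {"asset": "parcel-42", "owner": "bob"})
print(verify(ledger))   # → True: provenance intact
ledger[0]["data"]["owner"] = "mallory"
print(verify(ledger))   # → False: the rewritten history is detectable
```

A real blockchain adds distribution (many parties hold copies of the chain) and a consensus rule for appending blocks, but this hash-linking is the core of how it “records the provenance of a digital asset.”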

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual world that all can access, there are some examples that come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual world where they can eat, shop, work, and do nearly any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular-culture references to the concepts behind the Metaverse, such as Ready Player One and Neal Stephenson’s novel Snow Crash. Many people are excited about the possibilities the Metaverse will bring, such as new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not yet fully understood, and how do they ensure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman reported being gang raped on Horizon Worlds, the VR platform created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response beyond an apology from Meta and statements that it would make improvements. This horrifying experience showcases the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world can feel as though they actually experienced the assault in real life. This should raise red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people know they are in a virtual world, they believe they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. This will certainly need to be addressed, as laws are needed to prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse, given users’ ability to mask their identities and remain anonymous, so it could be hard to determine who committed a prohibited act. At the moment, some virtual-reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even where such terms exist, the problem remains how to enforce them. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse sits outside the real world should not mean that people can do whatever they want, whenever they want.


Breaking the Tech Chain to Slow the Growth of Single-Family Rentals

Sarah Bauer, MJLST Staffer

For many of us looking to buy our first homes during the pandemic, the process has ranged from downright comical to disheartening. Here in Minnesota, the Twin Cities have the worst housing shortage in the nation, a problem that has both Republican and Democratic lawmakers searching for solutions to help both renters and buyers access affordable housing. People of color are particularly impacted by this shortage because the Twin Cities are also home to the largest racial homeownership gap in the nation.

Although these issues have complex roots, tech companies and investors aren’t helping. The number of single-family rental (SFR) units — single-family homes purchased by investors and rented out for profit — has risen since the Great Recession and exploded over the course of the pandemic. In the Twin Cities, Black neighborhoods have been particularly targeted by investors for this purpose. In 2021, 8% of the homes sold in the Twin Cities metro were purchased by investors, and investors purchased homes in BIPOC-majority zip codes at nearly double the rate of white-majority neighborhoods. Because property ownership is a vehicle for wealth-building, removing housing stock from the available pool essentially transfers the opportunity to build wealth from individual homeowners to investors, who profit both from rents and from the increased value of the property at sale.

It’s not illegal for tech companies and investors to purchase and rent out single-family homes. In certain circumstances, it may actually be desirable for them to be involved in the market. If you are a seller who needs to sell your home before buying a new one, house-flipping tech companies can get you out of your home faster by purchasing it without a showing, an inspection, or contingencies. And investors purchasing single-family homes can provide a floor to the market during slowdowns like the Great Recession, a service that benefits homeowners as well as the investors themselves. But right now we have the opposite problem: not enough homes available for first-time owner-occupants. Assuming investor ownership is increasingly undesirable, what can we do about it? To address the problem, we need to understand how technology and investors are working in tandem to increase the number of single-family rentals.

 

The Role of House-Flipping Technology and iBuyers

The increase in SFRs is fueled by investors of all kinds: corporations, local companies, and wealthy individuals. For smaller players, recent developments in tech have made it easier for them to flip their properties. For example, a recent CityLab article discussed FlipOS, “a platform that helps investors prioritize repairs, access low-interest loans, and speed the selling process.” Real estate is a decentralized industry, and such platforms make the process of buying single-family homes and renting them out faster. Investors see this as a benefit to the community because rental units come onto the market faster than they otherwise would. But this technology also gives such investors a competitive advantage over would-be owner-occupiers.

The explosion of iBuying during the pandemic also hasn’t helped. iBuyers — short for “instant buyers” — use AI to generate automated valuation models to give the seller an all-cash, no contingency offer. This enables the seller to offload their property quickly, while the iBuyer repairs, markets, and re-sells the home. iBuyers are not the long-term investors that own SFRs, but the house-flippers that facilitate the transfer of property between long-term owners.
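
The “automated valuation model” idea can be sketched as a simple regression from comparable sales to an offer price. The snippet below is a toy, one-feature model on made-up data; real iBuyer AVMs are proprietary and use far richer features, which is exactly why they misfire on hard-to-price homes:

```python
# Toy AVM: ordinary least squares of sale price on square footage,
# fit to recent comparable sales ("comps"). Illustrative only.

def fit_price_model(comps):
    # comps: list of (square_feet, sale_price) pairs.
    n = len(comps)
    mean_x = sum(x for x, _ in comps) / n
    mean_y = sum(y for _, y in comps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in comps)
             / sum((x - mean_x) ** 2 for x, _ in comps))
    return slope, mean_y - slope * mean_x   # ($ per sqft, base price)

def estimate(sqft, slope, intercept):
    return slope * sqft + intercept

comps = [(1200, 300_000), (1500, 360_000), (1800, 420_000)]
slope, intercept = fit_price_model(comps)
print(round(estimate(1600, slope, intercept)))   # → 380000
```

An iBuyer’s cash offer is then roughly the model’s estimate minus fees and a risk margin; when prediction error exceeds that margin, the flip loses money.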

iBuyers like Redfin, Offerpad, and Opendoor (and formerly Zillow) have increasingly purchased properties in this way over the course of the pandemic. This is particularly true in Sunbelt states, which have a lot of newly constructed single-family homes that are easier to price accurately. As was apparent from the demise of Zillow’s iBuying program, these companies have struggled with profitability because home values can be difficult to predict. The aspects of real estate transactions that slow down traditional homebuyers (title checks, inspections, etc.) also slow down iBuyers. So while they can buy houses quickly by making all-cash offers with no inspection, they cannot resell them much faster than any other seller.

To the degree that iBuyers in the market are a problem, that problem is two-fold. First, they make it harder for first-time homeowners to purchase homes by offering cash and waiving inspections, something few first-time homebuyers can afford to offer. The second problem is a bigger one: iBuyers are buying and selling a lot of starter homes to large, non-local investors rather than back to owner-occupants or local landlords.

 

Transfer from Flippers to Corporate Investors

iBuyers as a group sell a lot of homes to corporate landlords, but it varies by company. After Zillow discontinued its iBuying program, Bloomberg reported that the company planned to offload 7,000 homes to real estate investment trusts (REITs). Offerpad sells 10-20% of its properties to institutional investors. Opendoor claims that it sells “the vast majority” of its properties to owner-occupiers. RedfinNow doesn’t sell to REITs at all. Despite the variation between companies, iBuyers on the whole sold one-fifth of their flips to institutional investors in 2021, with those sales more highly concentrated in neighborhoods of color. 

REITs allow firms to pool funds, buy bundles of properties, and convert them to SFRs. In addition to shrinking the pool of homes available for would-be owner-occupiers, REITs hire or own corporate entities to manage the properties. Management companies for REITs have increasingly come under fire for poor management, aggressively raising rent, and evictions. This is as true in the Twin Cities as elsewhere. Local and state governments do not always appear to be on the same page regarding enforcement of consumer and tenant protection laws. For example, while the Minnesota AG’s office filed a lawsuit against HavenBrook Homes, the city of Columbia Heights renewed rental occupancy licenses for the company. 

 

Discouraging iBuyers and REITs

If we agree as a policy matter that single-family homes should be owner-occupied, what are some ways to slow down the transfer of properties and give traditional owner-occupants a fighting chance? The most obvious place to start is a ban on iBuyers and investment firms acquiring homes. The Los Angeles city council voted late last year to explore such a ban. Canada has voted to ban most foreigners from buying homes for two years to temper its hot real estate market, a move that will affect iBuyers and investors alike.

Another option is to make flipping single-family homes less attractive for iBuyers. A state lawmaker from San Diego recently proposed Assembly Bill 1771, which would impose an additional 25% tax on the gain from a sale occurring within three years of a previous sale. This is a spin on the housing-affordability plank of Bernie Sanders’s 2020 presidential campaign, which would have placed a 25% house-flipping tax on sellers of non-owner-occupied property and a 2% empty-homes tax on vacant, owned homes. But if iBuyers arguably provide a valuable service to sellers, then it may not make sense to attack them across the board. Instead, it may make more sense to limit or heavily tax sales from iBuyers to investment firms, or, conversely, to reward iBuyers with a tax break for reselling homes to owner-occupants rather than to investment firms.

It is also possible to make investment in single-family homes less attractive to REITs. In addition to banning sales to foreign investors, the Liberal Party of Canada pitched an “excessive rent surplus” tax on post-renovation rent surges imposed by landlords. In addition to taxes, heavier regulation might be in order. Management companies for REITs can be regulated more heavily by local governments if the government can show a compelling interest reasonably related to accomplishing its housing goals. Whether REIT management companies are worse landlords than mom-and-pop operations is debatable, but the scale at which REITs operate should on its own make local governments think twice about whether it is a good idea to allow so much property to transfer to investors. 

Governments, neighborhood associations, and advocacy groups can also engage in homeowner education regarding the downsides of selling to an iBuyer or investor. Many sellers are hamstrung by needing to sell quickly or to the highest bidder, but others may have more options. Sellers know who they are selling their homes to, but they have no control over to whom that buyer ultimately resells. If they know that an iBuyer is likely to resell to an investor, or that an investor is going to turn their home into a rental property, they may elect not to sell their home to the iBuyer or investor. Education could go a long way for these homeowners. 

Lastly, governments themselves could do more. If they have the resources, they could create a variation on Edina’s Housing Preservation program, in which homeowners sell their houses to the city to preserve them as affordable starter homes. In a tech-oriented spin on that program, a local government could purchase a house to make sure it ends up in the hands of another owner-occupant rather than an investor. Governments could also decline to sell single-family homes seized through tax forfeiture to iBuyers or investors. And governments can encourage more home-building by loosening zoning restrictions. More homes mean a less competitive housing market, which REIT defenders say will make single-family housing a less attractive investment vehicle. Given the competitive advantage of such entities, it seems unlikely that first-time homebuyers could be on equal footing with investors absent such disincentives.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.
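
The dynamic the Journal documented is a simple engagement feedback loop. The toy model below is a hypothetical sketch, not TikTok’s actual For You system: the feed greedily serves whichever topic carries the highest engagement score, and engaging raises that score, so one early interaction with harmful content is enough to lock the feed onto it:

```python
def recommend(scores):
    # Greedy toy policy: serve the topic with the highest engagement score.
    return max(scores, key=scores.get)

def watch(scores, topic, engaged):
    # Engagement doubles the served topic's score.
    if engaged:
        scores[topic] *= 2

# One stray interaction gives "extreme_dieting" a tiny initial edge.
scores = {"sports": 1.0, "cooking": 1.0, "extreme_dieting": 1.01}
feed = []
for _ in range(10):
    topic = recommend(scores)
    feed.append(topic)
    watch(scores, topic, engaged=(topic == "extreme_dieting"))

print(feed.count("extreme_dieting"))   # → 10: every single recommendation
```

Real recommenders are probabilistic and use many signals, but this self-reinforcing loop is the mechanism HF 3724 targets by barring algorithmic recommendation to minors.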

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice points to teenagers no longer being able to get book recommendations from Goodreads’ algorithm or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free versions of their platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. TikTok, for example, has begun utilizing its algorithms to remove videos that violate platform rules.

 

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

 

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under 18, who would no longer be able to use the optimized, personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Social Media Influencers Ask What “Intellectual Property” Means

Henry Killen, MJLST Staffer

Today, just about anyone can name their favorite social media influencer. The most popular influencers are athletes, musicians, politicians, entrepreneurs, or models. Ultra-famous influencers, such as Kylie Jenner, can charge over $1 million for a single post featuring a company’s product. So what are the risks of being an influencer? TikTok star Charli D’Amelio has been on both sides of intellectual property disputes. A photo of Charli was included in media mogul Sheeraz Hasan’s video promoting his ability to “make anyone famous.” The video featured many other celebrities, such as Logan Paul and Zendaya. Charli’s legal team sent a cease-and-desist letter to Sheeraz demanding that her portion of the promotional video be scrubbed. Her lawyers assert that her presence in the promo “is not approved and will not be approved.” Charli has also been on the other side of celebrity intellectual property issues. The star published her first book in December and has come under fire from photographer Jake Doolittle for allegedly using photos he took without his permission. Though no lawsuit has been filed, Doolittle posted a series of Instagram posts blaming Charli’s team for not compensating him for his work.

Charli’s controversies highlight a bigger question society is facing: is content shared on social media platforms intellectual property? A good place to begin is figuring out what exactly intellectual property is. Intellectual property “refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names, and images used in commerce.” Social media platforms make it possible to access endless displays of content – from images to ideas – creating a cultural norm of sharing many aspects of life. Legal teams at the major social media platforms already have policies making it against the rules to take images from a social media feed and use them as one’s own. Bloggers, for example, may not be aware that what they write may already be trademarked or copyrighted, or that the images they pull off the internet for their posts may not be freely reposted. Influencers get reposted on sites like Instagram all the time, and not just by loyal fans. These reposts may seem harmless to many influencers, but it is actually against Instagram’s policy to repost a photo without the creator’s consent. This may not seem like a big deal, since what influencer doesn’t want more attention? But sometimes influencers’ work gets taken and then becomes a sensation. A group of BIPOC TikTok users is fighting to copyright a dance they created that eventually became one of the biggest dances in TikTok history. A key fact in their case is that the dance only became wildly popular after the most famous TikTok users began doing it.

There are few examples of social media copyright issues being litigated, but in August 2021, a Manhattan federal judge ruled that the practice of embedding social media posts on third-party websites, without permission from the content owner, could violate the owner’s copyright. In reaching this decision, the judge rejected the Ninth Circuit’s “server test,” under which embedding content from a third party’s social media account violates the content owner’s copyright only if a copy is stored on the defendant’s servers. General copyright law lays out four considerations in deciding whether a work is entitled to copyright protection: originality, fixation, idea versus expression, and functionality. These considerations notably leave a gray area in determining whether dances or expressions on social media sites can be copyrighted. Congress should enact a more comprehensive law to better address intellectual property as it relates to social media.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought too much of how growing up in the digital age affected me, but looking back, it is easy to see the cultural red flags. It came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” and in them found an internal study from the company showing Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late-Millennials have known for years. However, in the “Facebook Files” there is another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal troubles the company may find itself in and the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” This includes individuals from a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, submitted a complaint to the Securities and Exchange Commission (SEC) alleging that Facebook has “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 C.F.R. § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements regarding Facebook’s neutral application of standards that are directly at odds with the Facebook Files. Regardless of any SEC investigation, the whitelist has opened up a conversation about the need for serious reform in the big tech arena to make sure no company can create lists of privileged users again. All of the potential solutions deal with 47 U.S.C. § 230, known colloquially as “Section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for taking action in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from Section 230 itself. The proposed “Stop the Censorship Act of 2020,” brought by Republican Paul Gosar of Arizona, does just that. Proponents argue that it would force tech companies to be neutral or lose liability protections. Thus, no big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the standard would hew closely to First Amendment protections—problem solved! However, the current governing majority has serious concerns about forced neutrality, which would ignore problems of misinformation and the mental-health effects of social media in the aftermath of January 6th.

Elizabeth Warren, similar to a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes passing legislation to limit big tech companies in competing with small businesses that use their platforms, and reversing or blocking mergers, such as Facebook’s purchase of Instagram. Her plan doesn’t necessarily stop companies from having whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before unevenly applying the rules. Furthermore, Warren has called for regulators to use “every tool in the toolbox” in regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The government, the argument goes, cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Since some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual mandate by the government for action). If the Supreme Court were to adopt this reasoning, Facebook might be forced to adopt a First Amendment-centric approach, since its current hate-speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach—needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Outside of potential SEC violations, Section 230 is a complicated but necessary issue Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine that has rules for me, but not for thee.


What Does the SolarWinds Hack Mean for the Future of Law Firm Cybersecurity?

Sam Sylvan, MJLST Staffer

Last December, the massive software company SolarWinds acknowledged that its popular IT-monitoring software, Orion, was hacked earlier in the year. The software was sold to thousands of SolarWinds’ clients, including government agencies and Fortune 500 companies. A software update of Orion provided Russian-backed hackers with a backdoor into the internal systems of approximately 18,000 SolarWinds customers, a number likely to increase over time as more organizations discover that they, too, are victims of the hack. Even FireEye, the cybersecurity company that first identified the hack, learned that its own systems were compromised.

The hack has widespread implications for the future of cybersecurity in the legal field. Courts and government attorneys were not able to avoid the Orion hack. The cybercriminals hacked into the DOJ’s internal systems, leading the agency to report that the hackers might have breached 3,450 DOJ email inboxes. The Administrative Office of the U.S. Courts is working with DHS to audit vulnerabilities in the CM/ECF system, where highly sensitive non-public documents are filed under seal. As of late February, no law firms had announced that they too were victims of the hack, likely because law firms do not typically use Orion software for their IT management. Still, the Orion hack is a wake-up call to law firms across the country regarding their cybersecurity. There have been hacks before, including hacks of law firms, but nothing of this magnitude or potential level of sabotage. Now more than ever, law firms must contemplate and implement preventative measures and response plans.

Law firms of all sizes handle confidential and highly sensitive client documents and data. Oftentimes, firms have IT specialists but lack cybersecurity experts on the payroll: someone in-house who can continually develop the firm’s cybersecurity defenses. The SolarWinds hack shows why this needs to change, particularly for law firms that handle an exorbitant amount of highly confidential and sensitive client documents and can afford to add these experts to their ranks. Relying exclusively on consultants or other third parties for cybersecurity only further jeopardizes the security of law firms’ document management systems and caches of electronically stored client documents. Indeed, it was reliance on a third-party vendor that enabled the SolarWinds hack in the first place.

In addition to adding a specialist to the payroll, there are a number of other measures law firms can take to bolster their cybersecurity defenses. For those of us who think it is not a matter of “if” but rather “when,” an incident response plan ready to go is essential. Yet according to Jim Turner, chief operating officer of Hilltop Consultants, many law firms do not even have an incident response plan in place.

Further, because complacency and outdated IT software are of particular concern for law firms, “vendor vulnerability assessments” should become commonplace across all law firms. False senses of protection need to be discarded, and constant reassessment should become the norm. Moreover, firms should upgrade the software protections they have in place to include endpoint detection and response (EDR), which uses AI to detect hacking activity on systems. Lastly, purchasing cyber insurance is a strong safety measure in the event a law firm has to respond to a breach, as it provides the additional resources needed to respond effectively to a hack.


Hacking the Circuit Split: Case Asks Supreme Court to Clarify the CFAA

Kate Averwater, MJLST Staffer

How far would you go to make sure your friend’s love interest isn’t an undercover cop? Would you run an easy search on your work computer? Unfortunately for Nathan Van Buren, his friend was part of an FBI sting operation, and his conduct earned him a felony conviction under the Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030.

Van Buren, formerly a police sergeant in Georgia, was convicted of violating the CFAA. Van Buren knew Andrew Albo from Albo’s previous brushes with law enforcement; unbeknownst to Van Buren, Albo had turned informant for the FBI and was recording their interactions. Albo asked Van Buren to run the license plate number of a dancer, claiming he was interested in her and wanted to make sure she wasn’t an undercover cop. Trying to better his financial situation, Van Buren told Albo he needed money, and Albo gave him a fake license plate number and $6,000. Van Buren then ran the fake number in the Georgia Crime Information Center (GCIC) database. Based on the recorded interactions, the trial court convicted Van Buren of honest-services wire fraud (18 U.S.C. §§ 1343, 1346) and felony computer fraud under the CFAA.

Van Buren appealed and the Eleventh Circuit vacated and remanded the honest-services wire fraud conviction but upheld the felony computer fraud conviction. His case is currently on petition for review before the Supreme Court.

The relevant portion of the CFAA criminalizes obtaining “information from any protected computer” by “intentionally access[ing] a computer without authorization or exceed[ing] authorized access.” Van Buren’s defense was that he had authorized access to the information, though he admitted that he used it for an improper purpose. This disagreement over access restrictions versus use restrictions is the crux of the circuit split. Van Buren’s petition emphasizes the need for the Supreme Court to resolve these discrepancies.

Most favorable to Van Buren is the Ninth Circuit’s reading of the CFAA. The court previously held that the CFAA did not criminalize abusing authorized access for impermissible purposes. Recently, the Ninth Circuit reaffirmed this interpretation. The Second and Fourth Circuits align with the Ninth in interpreting the CFAA narrowly, declining to criminalize conduct similar to Van Buren’s.

In affirming his conviction, the Eleventh Circuit rested on its previous decision in Rodriguez, which adopted a much broader reading of the CFAA. The First, Fifth, and Seventh Circuits join the Eleventh in interpreting the CFAA to reach inappropriate use of authorized access.

Van Buren’s case has sparked a bit of controversy and prompted multiple organizations to file amicus briefs. They are pushing the Supreme Court to interpret the CFAA in a narrow way that does not criminalize common activities. Broad readings of the CFAA lead to criticism of the law as “a tool ripe for abuse.”

Whether or not the Supreme Court agrees to hear the case, next time someone offers you $6,000 to do a quick search on your work computer, say no.


Forget About Quantum Computers Cracking Your Encrypted Data, Many Believe End-to-End Encryption Will Lose Out as a Matter of Policy

Ian Sannes, MJLST Staffer

As reported in Nature, Google recently announced that it has finally achieved quantum supremacy: the point at which a quantum computer, which computes with qubits rather than the bits of conventional computers, can solve a problem faster than any conventional computer. However, according to John Preskill, who coined the term “quantum supremacy,” quantum computers are not a threat to encryption any time soon; such theorized uses remain many years out. Furthermore, the question remains whether quantum computers are even a threat to encryption at all. IBM recently showcased one way to encrypt data that is immune to the theoretical cracking ability of future quantum computers. It seems that while one method of encryption is theoretically prone to attack by quantum computers, the industry will simply adopt methods that are not prone to such attacks when it needs to.

Does this mean that end-to-end encryption methods will always protect me?

Not necessarily. Stewart Baker opines that there are many threats to encryption, such as homeland security policy, foreign privacy laws, and content moderation, which he believes will win out over the right to have encrypted private data.

The highly publicized efforts of the FBI in 2016 to force Apple to unlock the encryption on an iPhone for national security reasons ended with the FBI dropping the case after it hired a third party who was able to crack the encryption. This may seem like a win for Silicon Valley’s historically pro-encryption stance, but foreign laws, such as the UK’s Investigatory Powers Act, are opening the door to government power in obtaining users’ digital data.

In October of 2019, Attorney General Bill Barr requested that Facebook halt its plans to implement end-to-end encryption on its messaging services because it would impede the investigation of serious crimes. Mark Zuckerberg, the CEO of Facebook, admitted it would be more difficult to identify and remove harmful content if such encryption were implemented, but Facebook has yet to implement the encryption.

Some believe legislators may simply force software developers to create backdoors to users’ data. Kalev Leetaru believes content moderation policy concerns will allow governments to bypass encryption completely by forcing device manufacturers or software companies to install client-side content-monitoring software capable of flagging suspicious content and sending decrypted versions to law enforcement automatically.

The trend seems to be heading toward some governmental bypass of conventional encryption. However, just as IBM’s quantum-proof encryption was created to address a weakness in encryption, consumers will likely find another way to encrypt their data if they feel there is a need.


Pacemakers, ICDs, and ICMs – Oh My! Implantable Heart Detection Devices

Janae Aune, MJLST Staffer

Heart attacks and heart disease kill hundreds of thousands of people in the United States every year. Heart disease affects every person differently based on their genetic and ethnic background, lifestyle, and family history. While some people are aware of their risk of heart problems, over 45 percent of sudden cardiac deaths occur outside of the hospital. With a condition as spontaneous as a heart attack, accurate information tracking and reporting is vital to effective treatment and prevention. As in any market, the market for heart monitoring devices is diverse, with new equipment arriving every year. The newest device in a long line of technology is the LINQ monitoring device, which builds on and works with devices already established in the medical community.

Pacemakers were first used effectively in 1969, when lithium batteries were invented. These devices are surgically implanted under the skin of a patient’s chest and help control the heartbeat. They can be implanted for temporary or permanent use and are usually targeted at patients who experience bradycardia, an abnormally slow heart rate. Pacemakers require consistent check-ins with a doctor, usually every three to six months, and must be replaced every 5 to 15 years, depending on battery life. These devices revolutionized heart monitoring but involve significant risks from the implantation surgery and potential device malfunction.

Implantable cardioverter defibrillators (ICDs) are also surgically implanted devices, but they differ from pacemakers in that they deliver a single shock when needed rather than continuous electrode pulses. ICDs are similar to the paddles doctors use when trying to stimulate a heart in the hospital (think yelling “charge!”). These devices are used mostly in patients with tachycardia, a heartbeat that is too fast. Implantation of an ICD requires feeding wires through the blood vessels of the heart. A newly developed subcutaneous ICD (S-ICD) gives patients with structural defects in their heart’s blood vessels another ICD option. Like a pacemaker, an ICD monitors activity constantly, but its data is read only at follow-up appointments with the doctor. ICDs last an average of seven years before the battery needs to be replaced.

The Reveal LINQ system is a newly developed heart monitoring device that records information and transmits it to a patient’s doctor continuously. The system requires surgical implantation of a small device known as an insertable cardiac monitor (ICM). The ICM works with another component, the patient monitor, a bedside unit that instantly transmits the information collected by the ICM to a doctor. A patient assistant control is also available, which allows the patient to manually mark particular heart activities and transmit them in more detail. The LINQ system allows a doctor to track a patient’s heart activity remotely rather than requiring the patient to come in for the history to be examined. Continuous tracking and transmission allow a patient’s doctor to examine heart activity more accurately and therefore create a more effective treatment approach.

With the development of wearable technology meant to track health information and report it to the wearer, devices such as the LINQ system provide new opportunities for technologies to work together to promote better health practices. The Apple Watch Series 4 includes electrocardiogram monitoring that records heart activity and checks the reading for atrial fibrillation (AFib), the same kind of heart activity that pacemakers, ICDs, and the LINQ system are meant to monitor. The future of heart attack and heart disease detection and treatment could be massively impacted by the ability to monitor heart behavior in multiple different ways. Between devices that can shock the heart, continuously monitor and transmit information about it, and alert wearers to abnormal heart rhythms from their watches, a future of decreased heart problems could become a reality.

All of these newly developed methods of continuous tracking raise the question of how all of that information is protected. Health and heart behavior, which is internal and out of one’s control, is as personal as information gets. Electronic monitoring and transmission of this data opens it up to cybersecurity attacks. Cybersecurity and data privacy issues with these devices have started to be addressed more fully; however, the concerns differ depending on which implantable device a patient has. Vulnerabilities have been identified in ICD devices that would allow an unauthorized individual to access and potentially manipulate the device. Scholars have argued that efforts to decrease vulnerabilities should focus on protecting the confidentiality, integrity, and availability of information transmitted by implantable devices. The FDA has indicated that the use of a home monitoring system could decrease the potential vulnerabilities. As the benefits from heart monitors and heart data continue to grow, we need to be sure that our privacy protections grow with them.


Wearable, Shareable, Terrible? Wearable Technology and Data Protection

Alex Wolf, MJLST Staffer

You might consider the Sony Walkman, which celebrates its 40th anniversary this year, to be the first wearable technology of the modern day. After the invention of Bluetooth 1.0 in 2002, commercial competitors began to realize the vast promise of this emergent technology. Fifteen years later, over 265 million wearable tech devices are sold annually. It looks to be a safe bet that this trend will continue.

A popular subset of wearable technology is the fitness tracker. The user attaches the device to themselves, usually on the wrist, and it records their movements. Lower-end trackers record basics like steps taken, distance walked or run, and calories burned, while more sophisticated ones can track heart rate and sleep statistics (sometimes also featuring fun extras like Alexa support and entertainment app playback). Although this data cannot replace the care and advice of a healthcare professional, there have been positive health results. Some people have learned of serious health problems only once they started wearing a fitness tracker, and studies have found a correlation between wearing a Fitbit and increased physical activity.

Wearable tech is not all good news, however; legal commentators and policymakers are worried about the privacy compromises that result from personal data leaving the owner’s control. The Health Insurance Portability and Accountability Act (HIPAA) was passed by Congress with the aim of providing legal protections for individuals’ health records and data when they are disclosed to third parties. But, generally speaking, wearable tech companies are not bound by HIPAA’s reach. The companies claim that no one else sees the data recorded on your device (with a few exceptions, like the user’s express written consent). But is this true?

A look at the modern American workplace can provide an answer. Employers are attempting to find new ways to manage health insurance costs, as survey data shows that employees are frequently concerned with the healthcare plan that comes with their job. Some have responded by purchasing Fitbits and other such devices for their employees’ use. Jawbone, a fitness device company on its way out, formed an “Up for Groups” plan specifically marketed to employers seeking cheaper insurance rates for their employee coverage plans. The plan allows executives to access aggregate health data from wearable devices to help make cost-benefit determinations about which plan is the best choice.

Hearing the complaints of commentators and state elected representatives, members of Congress have responded: Senators Amy Klobuchar and Lisa Murkowski introduced the “Protecting Personal Health Data Act” in June 2019. It would create a National Task Force on Health Data Protection, which would advise the Secretary of Health and Human Services (HHS) on creating practical minimum standards for biometric and health data. The bill is a recognition that HIPAA has serious shortcomings for digital health data privacy. As a 2018 HHS Committee Report noted, “A class of health records that can be subject to HIPAA or not subject to HIPAA is personal health records (PHRs) . . . PHRs not subject to HIPAA . . . [have] no other privacy rules.” Dena Mendelsohn, a lawyer for Consumer Reports, remarked favorably that the bill is needed because the current framework is “out of date and incomplete.”

The Supreme Court has recognized privacy rights in cell-site location data, and a federal court has recognized standing to sue for a group of plaintiffs whose personally identifiable information (PII) was hacked and uploaded to the Dark Web. Many in the legal community are pushing the High Court to offer clearer guidance to both tech consumers and corporations on the protection of health and other personal data, including private rights of action. Once these procedural hurdles are resolved, we may see firmer judicial directives on an issue that implicates the protected interests of more and more people.