Data

The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows is a high-level summary. Armed attackers opened fire on a group of government employees in San Bernardino, CA on the morning of December 2, 2015. The attackers fled the scene, but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and federal courts.

The request was for Apple to create a signed, modified version of its iOS operating system that would disable key security features, including the auto-erase function triggered by repeated failed passcode attempts, enabling the FBI to access the stored data on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights, and re-ignited a heated debate over the proper scope of government’s ability to regulate encryption practices.
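To see why those safeguards matter, here is a minimal back-of-the-envelope sketch. The per-attempt timing is a hypothetical figure, not Apple’s actual implementation: the point is only that a four-digit passcode offers 10,000 possibilities, so once retry delays and auto-erase are removed, brute force becomes trivial.

    # Toy illustration of why the requested iOS modifications mattered.
    # A four-digit passcode has only 10**4 possibilities; what protects
    # it is the escalating delay between attempts and the auto-erase
    # after ten failures. The timing figure below is hypothetical.

    ATTEMPT_DELAY_SECONDS = 0.08   # assumed time per electronic attempt
    PASSCODE_SPACE = 10 ** 4       # four-digit numeric passcode

    def worst_case_hours(delay_per_attempt: float, space: int) -> float:
        """Worst-case brute-force time once retry limits are disabled."""
        return space * delay_per_attempt / 3600

    print(f"{worst_case_hours(ATTEMPT_DELAY_SECONDS, PASSCODE_SPACE):.2f} hours")
    # -> 0.22 hours: with the safeguards gone, the passcode falls quickly.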

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent governmental contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, for this one iPhone (the unlocking method reportedly applies only to the iPhone 5c, not other models like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through the courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, all based on a 227-year-old act of last resort, the All Writs Act of 1789. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, it’s obvious that legislatures are taking this issue seriously (see NYS’s proposed bill, recently returned to committee).

Although he is only ⅔ of a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this situation, the courts could find that our collective societal interest in privacy would always preclude enforcement of an order like this. This seems unlikely, especially given the demonstrated willingness of a court in this case to make the order in the first place.
  2. Always OK to demand a hack – Like option 1, this option seems unlikely, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale to justify hacking in every circumstance. Clearly, the United States has not yet transitioned to an Orwellian dystopia.
  3. Sometimes OK to demand a hack, but with scrutiny – Here, in the middle, is where it seems likely we’ll find courts in the coming years. Obviously, convincing arguments exist on each side, and it seems possible to reconcile the interests in personal privacy and national security with the burden on a tech company’s privacy-protection policies, given the right set of facts. The San Bernardino shooting is not that case, though. The alleged terrorist threat has not been characterized as sufficiently imminent, and the FBI even admitted that cracking the cell phone was not integral to the case; in the end, it found nothing of note anyway. It will (probably) take a much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.


Requiring Backdoors Into Encrypted Cellphones

Steven Groschen, MJLST Managing Editor

The New York State Senate is considering a bill that requires manufacturers and operating system designers to create backdoors into encrypted cellphones. Under the current draft, failure to comply with the law would result in a $2,500 fine, per offending device. This bill highlights the larger national debate concerning privacy rights and encryption.

In November of 2015, the Manhattan District Attorney’s Office (MDAO) published a report advocating for a federal statute requiring backdoors into encrypted devices. One of the MDAO’s primary arguments in support of the statute is the lack of alternatives available to law enforcement for accessing encrypted devices. The MDAO notes that traditional investigative techniques have largely been ineffective. Additionally, the MDAO argues that certain types of data residing on encrypted devices often cannot be found elsewhere, such as on a cloud service. Naturally, the inaccessibility of this data is a significant hindrance to law enforcement. The report offers an excellent summary of the law enforcement perspective; however, as with all debates, there is another side.

The American Civil Liberties Union (ACLU) has stated that it opposes using warrants to force device manufacturers to unlock their customers’ encrypted devices. A recent ACLU blog post presented arguments against this practice. First, the ACLU argued that the government should not require “extraordinary assistance from a third party that does not actually possess the information.” The ACLU perceives these warrants as conscripting Apple (and other manufacturers) to conduct surveillance on behalf of the government. Second, the ACLU argued that using search warrants bypasses a “vigorous public debate” regarding the appropriateness of the government having backdoors into cellphones. Presumably, the ACLU is less opposed to laws such as the one proposed in the New York Senate, because that process involves an open public debate rather than warrants.

Irrespective of whether the New York Senate bill passes, the debate over government access to its citizens’ encrypted devices is sure to continue. Citizens will have to balance public safety considerations against individual privacy rights—a tradeoff as old as government itself.


Circumventing EPA Regulations Through Computer Programs

Ted Harrington, MJLST Staffer

In September of 2015, it was Volkswagen Group (VW). This December, it was the General Electric Company (GE) finalizing a settlement in the United States District Court in Albany. The use of computer programs or other technology to override, or “cheat,” some type of Environmental Protection Agency (EPA) regulation has become seemingly commonplace.

GE uses silicone as part of its manufacturing process, which results in volatile organic compounds and chlorinated hydrocarbons, both hazardous byproducts. The disposal of hazardous materials is closely regulated by the Resource Conservation and Recovery Act (RCRA). Under this act, the EPA has delegated permitting authority to the New York State Department of Environmental Conservation (DEC). This permitting authority allows the DEC to grant permits for the disposal of hazardous wastes in the form of an NYS Part 373 Permit.

The permit allowed GE to store hazardous waste, operate a landfill, and use two incinerators on-site at its Waterford, NY plant. The permit was originally issued in 1989, and was renewed in 1999. The two incinerators included an “automatic waste feed cutoff system” designed to keep the GE facility in compliance with RCRA and the NYS Part 373 Permit. If the incinerator reached a certain limit, the cutoff system would simply stop feeding more waste.

Between September 2006 and February 2007, the cutoff system was overridden by computer technology, or manually by GE employees, on nearly 2,000 occasions. This resulted in hazardous waste being disposed of in amounts grossly above the limits of the issued permits. In early December, GE quickly settled the claim by paying $2.25 million in civil penalties.
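The mechanics at issue can be sketched in a few lines of Python. This is a hypothetical illustration of an automatic feed cutoff and the kind of override alleged here; the names and limits are invented, not GE’s actual control software.

    # Sketch of an automatic waste-feed cutoff with an override flag.
    # All names and limits are hypothetical, not GE's actual software.

    FEED_RATE_LIMIT = 100.0  # assumed permitted feed rate

    def feed_allowed(current_rate: float, override: bool = False) -> bool:
        """Return True if the incinerator may keep accepting waste."""
        if override:
            # Setting this flag, by program or by hand, bypasses the
            # permit-compliance check entirely: the alleged conduct.
            return True
        return current_rate < FEED_RATE_LIMIT

    assert feed_allowed(80.0)                   # within the limit
    assert not feed_allowed(120.0)              # cutoff engages as designed
    assert feed_allowed(120.0, override=True)   # cutoff silently defeated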

Beyond the extra pollution caused by GE, a broader problem is emerging—in an increasingly technological world, what can be done to prevent companies from skirting regulations using savvy computer programs? With more opportunities than ever to get around regulation using technology, is it even feasible to monitor these companies? It is virtually certain that similar instances will continue to surface, and agencies such as the EPA must be on the forefront of developing preventative technology to slow this trend.


Warrant Now Required for One Type of Federal Surveillance, and May Soon Follow for State Law Enforcement

Steven Graziano, MJLST Staffer

As technology has advanced over recent decades, law enforcement agencies have expanded their enforcement techniques. One example is the cell-site simulator, otherwise known as a sting ray. Put simply, sting rays act as a mock cell tower, detect the use of a specific phone number in a given range, and then use triangulation to locate the phone. However, the recent heightened awareness of, and criticism directed towards, government and law enforcement surveillance has affected their potential use. Specifically, many federal law enforcement agencies have been barred from using them without a warrant, and federal legislation is pending that would require state and local law enforcement agents to also obtain a warrant before using a sting ray.
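A toy sketch can make the triangulation step concrete: given three known tower positions and estimated distances to a handset, the handset’s position falls out of two linear equations. The coordinates, ranges, and function here are invented for illustration; real cell-site simulators derive ranges from signal timing and strength measurements that this sketch only gestures at.

    # Toy trilateration: locate a handset from three known "tower"
    # positions and estimated ranges. Subtracting pairs of circle
    # equations (x - xi)^2 + (y - yi)^2 = ri^2 yields a linear system.

    def trilaterate(p1, r1, p2, r2, p3, r3):
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
        c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
        a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
        c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a1 * b2 - a2 * b1
        if det == 0:
            raise ValueError("towers are collinear; position is ambiguous")
        return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

    # Handset actually at (3, 4); ranges estimated from three towers.
    print(trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5))
    # -> (3.0, 4.0)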

Federal law enforcement agencies, specifically Immigration, Secret Service, and Homeland Security agents, must now obtain search warrants before using sting rays, as announced by the Department of Homeland Security. Homeland Security’s shift in policy comes after the Department of Justice made a similar statement. The DOJ has affirmed that although it had previously used cell-site simulators without a warrant, going forward it will require law enforcement agencies to obtain a search warrant supported by probable cause. DOJ agencies directed by this policy include the FBI and the Drug Enforcement Administration. This shift in federal policy was largely a response to pressure put upon Washington by civil liberties groups, as well as the shift in the American public’s attitude towards surveillance generally.

Although these policies only affect federal law enforcement agencies, steps have also been taken to extend the warrant requirement for sting rays to state and local governments. Federal lawmakers have introduced the Cell-Site Simulator Act of 2015, also known as the Stingray Privacy Act, to hold state and local law enforcement to the same Fourth Amendment standards as the federal government. The bill was proposed in the House of Representatives by Rep. Jason Chaffetz (R-Utah) and was referred to a congressional committee on November 2, 2015, which will consider it before sending it to the entire House or Senate. In addition to requiring a warrant, the act also requires prosecutors and investigators to disclose to judges that the technology they intend to use in executing the warrant is specifically a sting ray. The proposed law was partially a response to a critique of the federal warrant requirement, namely that it did not compel state or local law enforcement to also obtain a search warrant.

The use of advanced surveillance programs by federal, state, and local law enforcement has been a controversial subject recently. Although law enforcement has a duty to fully enforce the law, and this includes using the entirety of its resources to detect possible crimes, it must still adhere to the constitutional protections laid out in the Fourth Amendment when doing so. Technology changes and advances rapidly, and sometimes it takes the law some time to adapt. However, the shift in policy at all levels of government shows that the law may be beginning to catch up to law enforcement’s use of technology.


Digital Millennium Copyright Act Exemptions Announced

Zach Berger, MJLST Staffer

The Digital Millennium Copyright Act (DMCA), first enacted in 1998, prevents owners of digital devices from making use of those devices in any way that the copyright holder does not explicitly permit. Codified in part in 17 U.S.C. § 1201, the DMCA makes it illegal to circumvent digital security measures that prevent unauthorized access to copyrighted works such as movies, video games, and computer programs. This law prevents users from breaking what are known as access controls, even if the purpose would fall under lawful fair use. According to Kit Walsh, a staff attorney at the Electronic Frontier Foundation (a nonprofit digital rights organization), “This ‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal . . . it can be used instead to hide wrongdoing hidden in computer code.” Essentially, everything not explicitly permitted is forbidden.

However, these restrictions are not ironclad. Every three years, users may request exemptions to this law for lawful fair uses from the Library of Congress (LOC), but these exemptions are not easy to receive. To obtain an exemption, activists must not only propose new exemptions, but also plead for ones already granted to be continued. The system is flawed, as users often need a way to circumvent their devices to make full use of the products. However, the LOC has recently released its new list of exemptions, and this expanded list represents a small victory for digital rights activists.

The exemptions granted will go into effect in 2016, and cover 22 types of uses affecting movies, e-books, smart phones, tablets, video games and even cars. Some of the highlights of the exemptions are as follows:

  • Movies, where circumvention is used in order to make use of short portions of the motion pictures:
    • For educational uses by university and grade school instructors and students.
    • For e-books offering film analysis.
    • For uses in noncommercial videos.
  • Smart devices:
    • Users can “jailbreak” these devices to allow them to interoperate with or remove software applications, and to unlock phones from their carriers.
    • Such devices include smart phones, televisions, and tablets or other mobile computing devices.
      • In 2012, jailbreaking smartphones was allowed, but not tablets. This distinction has been removed.
  • Video games:
    • Fan-operated online servers are now allowed to support video games once the publishers shut down official servers.
      • However, this only applies to games that would be made nearly unplayable without the servers.
    • Museums, libraries, and archives can go a step further by jailbreaking games as needed to get them functioning properly again.
  • Computer programs that operate devices primarily designed for use by individual consumers, for purposes of diagnosis, repair, and modification.
    • This includes voting machines, automobiles, and implanted medical devices.
  • Computer programs that control automobiles, for purposes of diagnosis, repair, and modification of the vehicle.

These new exemptions are a small but significant victory for consumers under the DMCA. The ability to analyze your automobile’s software is especially relevant in the wake of the aforementioned Volkswagen emissions scandal. However, the exemptions are subject to some important caveats. For example, only video games that are almost completely unplayable can have user-made servers. This means that for games where only an online multiplayer feature is lost, such servers are not allowed. A better long-term solution is clearly needed, as this burdensome process is flawed and has led to what the EFF has called “unintended consequences.” Regardless, as long as we still have this draconian law, exemptions will be welcomed. To read the final rule, the Register’s recommendation, and the introduction (which provides a general overview), click here.


The Legal Persona of Electronic Entities – Are Electronic Entities Independent Entities?

Natalie Gao, MJLST Staffer

The advent of the electronic age brought about digital changes and easier access to more information, but with this electronic age came certain electronic problems. One such problem is whether or not electronic entities like (1) usernames online, (2) software agents, (3) avatars, (4) robots, and (5) artificial intelligence are independent entities under law. A username for a website like eBay or for a forum may, for all intents and purposes, be just a pseudonym for the person behind the computer. But at what point does the electronic entity become an independent entity, and at what point does the electronic entity start to have the rights and responsibilities of a legally independent entity?

In 2007, Plaintiff Marc Bragg brought suit against Defendants Linden Research Inc. (Linden), owner of the massively multiplayer online role-playing game (MMORPG) Second Life, and its Chief Executive Officer. Second Life is a game with a telling title: it essentially allows its players to have a second life. It has a market for goods, extensive communications functions, and even a red-light district, and real universities have been given digital campuses in the game, where they have held lectures. Players of Second Life purchase items and land in-game with real money.

Plaintiff Bragg’s digital land was frozen in-game by moderators due to “suspicious” activity, and Plaintiff brought suit claiming he had property rights to the digital land. Bragg v. Linden Research, Inc., like its descendants, including Evans v. Linden Research, Inc. (2011), was settled out of court and therefore does not offer the legal precedent it potentially could have, given its unique fact pattern. Second Life is also a very unusual game because, pre-2007, Linden had been promoting Second Life by announcing that it recognized virtual property rights and that whatever users owned in-game would belong to the user instead of to Linden. But can the users really own digital land? Would it be the users themselves owning the digital land, or would the avatars they make on the website, the ones living this “second life,” be the true owners? And at what point can avatars or any electronic entity even have rights and responsibilities?

An independent entity is not the same as a legally independent entity, because the latter, beyond just existing independently, has rights and responsibilities pursuant to law. MMORPGs may use avatars to allow users to play games, and an avatar may be one step more independent than a username, but is that avatar an independent entity that can, for example, legally conduct commercial transactions? Or rather, is the avatar conducting a “transaction” in a leisure context? In Bragg v. Linden Research, Inc., the court touches on the issue of transactions, but it rules only on civil procedure and contract law. And what about avatars existing now in some games that can play themselves? Is “automatic” enough to make something an “independent entity”?

The concept of an independent electronic entity is discussed at length in Bridging the Accountability Gap: Rights for New Entities in the Information Society. Authors Koops, Hildebrandt, and Jaquet-Chiffelle compare the legal personhood of electronic artificial entities with that of animals, ships, trust funds, and organizations, arguing that giving legal personhood to basically all (or just “all”) currently existing electronic entities brings up problems such as needing representation with agency, lacking the “intent” required for certain crimes and/or areas of law, and likely needing to base some of their legal appeals in the area of human/civil rights. The entities may be “actants” (in that they are capable of acting), but they are not always autonomous. A robot would need mens rea for responsibility to be assessed, and none of the five listed entities has consciousness (which animals do have), let alone self-consciousness. The authors argue that none of the artificial entities fit the prima facie definition of a legal person; instead, they evaluate the entities on a continuum from automatic (acting) to autonomic (acting on its own), as well as on each entity’s ability to contract and bear legal responsibility. They come up with three possible solutions: one “Short Term,” one “Middle Term,” and one “Long Term.” The Short Term method, which seems to be the most legally feasible under today’s law, proposes creating a corporation (a legally independent entity) to create the electronic entity. This concept is reminiscent of theorist Gunther Teubner’s idea of using a hybrid entity, one that combines an electronic agent with a limited-liability company, instead of an individual entity, to give something rights and responsibilities.

Inevitably, even though under the actual claims brought to the court Bragg v. Linden Research, Inc. seems more like an open-source licensing issue than an issue of electronic independent entities, Koops, Hildebrandt, and Jaquet-Chiffelle still try to answer some questions that may be very salient one day. Programs can be probabilistic algorithms, but no matter how unpredictable a program may be, its unpredictability is fixed in the algorithm. An artificial intelligence (AI), a program that grows and learns and creates unpredictability on its own, may be a thing of science fiction and The Avengers today, but it may one day be reality. And an AI does not have to be the AI of I, Robot; it does not have to have a personality. At what point will we have to treat electronic entities as legally autonomic and hold them responsible for the things they have done? Will the future genius-programmer, who creates an AI to watch over the trusts in his or her care, be held accountable when that AI starts illegally funneling money out of the AmeriCorp bank account the AI was created to watch over and into the personal savings accounts of lamer non-MJLST law journals at the University of Minnesota? Koops, Hildebrandt, and Jaquet-Chiffelle argue yes, but it largely depends on the AI itself and the area of law.


Data Breach and Business Judgment

Quang Trang, MJLST Staffer

Data breaches are a threat to major corporations. Corporations such as Target Corp. and Wyndham Worldwide Corp. have been victims of mass data breaches. The damage caused by such breaches has led shareholders to file derivative lawsuits seeking to hold boards of directors responsible.

In Palkon v. Holmes, 2014 WL 5341880 (D.N.J. 2014), Wyndham Worldwide Corp. shareholder Dennis Palkon filed a lawsuit against the company’s board of directors. The judge granted the board’s motion to dismiss partially because of the business judgment rule. The business judgment rule governs when boards refuse shareholder demands. The principle of the business judgment rule is that “courts presume that the board refused the demand on an informed basis, in good faith and in honest belief that the action taken was in the best interest of the company.” Id. The shareholder who brings the derivative suit has the burden to rebut that presumption, by showing that the board did not act in good faith or did not base its decision on a reasonable investigation.

Cyber security is a developing area. People are still unsure how prevalent the problem is and how damaging it is. It is difficult to determine what a board needs to do with such ambiguous information. At a time when there are no set corporate cyber security standards, it is difficult for a shareholder to show bad faith or a lack of reasonable investigation. Until clear standards and procedures for cyber security are widely adopted, derivative suits over data breaches will likely be dismissed, as in Palkon.


E.C.J. Leaves U.S. Organizations to Search for Alternative Data Transfer Channels

J. Adam Sorenson, MJLST Staffer

The Court of Justice of the European Union (E.C.J.), Europe’s top court, invalidated the 15-year-old U.S.-EU Safe Harbor Program, effective immediately, on Oct. 6 (Schrems v. Data Prot. Comm’r, E.C.J., No. C-362/14, 10/6/15). This left the thousands of businesses that use the program without a reliable and lawful way to transfer personal data from the European Economic Area to the United States.

The Safe Harbor Program was developed by the U.S. Department of Commerce in consultation with the European Commission. It was designed to provide a streamlined and cost-effective means for U.S. organizations to comply with the European Commission’s Directive on Data Protection (Data Protection Directive), which went into effect in October 1998. The program allowed U.S. organizations to join voluntarily and freely transfer personal data out of all 28 member states if they self-certified and complied with the program’s seven Safe Harbor Privacy Principles. The program was enforced by the U.S. Federal Trade Commission. Schrems v. Data Prot. Comm’r, however, brought a swift halt to the program.

This case revolves around Mr. Schrems, an Austrian Facebook user since 2008. Some or all of the data collected by the social networking site Facebook is transferred to servers in the United States, where it undergoes processing. Mr. Schrems brought suit against the Data Protection Commissioner after the Commissioner declined to exercise his statutory authority to prohibit this transfer. The case turned on a 2000 decision by the European Commission, which had found that the program provided adequate privacy protection and was in line with the Data Protection Directive. The directive prohibits “transfers of personal data to a third country not ensuring an adequate level of protection” (Schrems). The directive goes on to say that an adequate level of protection may be found where a third country ensures it by reason of its domestic law or international commitments.

The E.C.J. found that the current Safe Harbor Program did not ensure an adequate level of protection, and it therefore found the 2000 decision, and the program itself, invalid. This means that all U.S. organizations relying on the program to transfer personal data out of the EEA are doing so in violation of the Data Protection Directive. The decision requires U.S. organizations to find alternative methods of approved data transfer, which generally means seeking the approval of data protection authorities in the EU, a potentially lengthy process.

Although the EU national data protection authorities may allow for some time before cracking down on these U.S. organizations, this decision signals a massive shift in the way personal data is transferred between the U.S. and Europe, and it will most likely have ripple effects throughout the data privacy and data transfer worlds.