October 2014

FCC Issues Notice of Proposed Rulemaking to Ensure an Open Internet, Endangers Mid-Size E-Commerce Retailers

Emily Harrison, MJLST Staff

The United States Court of Appeals for the D.C. Circuit has twice struck down key provisions of the Federal Communications Commission’s (FCC) orders on how to ensure an open Internet. The Commission’s latest articulation is its May 15, 2014 notice of proposed rulemaking, In the Matter of Protecting the Open Internet. According to the notice, the proposed rules seek to provide a “broadly available, fast and robust Internet as a platform for economic growth, innovation, competition, free expression, and broadband investment and deployment.” The notice incorporates legal standards previously affirmed by the D.C. Circuit in Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014). For example, the FCC relies on Verizon to establish that it may use Section 706 of the Telecommunications Act of 1996 as its source of authority for promulgating Open Internet rules. Verizon also explained how the FCC can employ a valid “commercially reasonable” standard to monitor the behavior of Internet service providers.

Critics of the FCC’s proposal for network neutrality argue that the proposed standards are insufficient to ensure an open Internet. The proposal arguably allows broadband carriers to offer “paid prioritization” services. The sale of this prioritization not only leads to “fast” and “slow” traffic lanes, but also allows broadband carriers to charge content providers for priority in “allocating the network’s shared resources,” such as the relatively scarce bandwidth between the Internet and an individual broadband subscriber.

Presuming there is some merit to the critics’ arguments, if Internet service providers (ISPs) could charge certain e-commerce websites different rates for a faster connection to customers, the prioritized websites could gain a competitive advantage in the marketplace. Disadvantaged online retailers could see a relative decrease in revenue. For example, without adequate net neutrality standards, an ISP could prioritize certain websites, such as Amazon or Target, and allow them optimal broadband speeds. Smaller and mid-sized retailers may only have the capital to access a slower connection. As a result, customers would consistently have a better retail experience on the websites of larger retailers because of the speed with which they can view products or complete transactions. Insufficient net neutrality policies could therefore have a negative effect on the bottom line of many e-commerce retailers.

Comments can be submitted in response to the FCC’s notice of proposed rulemaking at: http://www.fcc.gov/comments


Self-Driving Vehicles Are Coming

Spencer Peck, RA State and Local Policy Program, MJLST Guest Blogger

Self-driving vehicles are coming, possibly within the decade. But what exactly do drivers, laws and lawmakers, and local economies need to do to prepare for autonomous vehicles? On Friday, October 31, technical, legal, and policy experts will gather at the Humphrey School of Public Affairs to discuss exactly this. More information about the all-day conference, Autonomous Vehicles: The Legal and Policy Road Ahead, is available by following the link.

Self-driving vehicles (SDVs) are the future of automotive transportation. Driverless cars are often discussed as a “disruptive technology” with the ability to transform transportation infrastructure, expand access, and deliver benefits to a variety of users. Observers most often cite 2020 as the time frame for limited availability of the next level of self-driving vehicles, with wider public adoption between 2040 and 2050. Recent announcements by Google and other major automakers indicate huge potential for development in this area. In fact, an Audi RS7 recently piloted itself around the famous Hockenheimring race track; the fully autonomous car reached 150 mph and even recorded a lap 5 seconds faster than a human competitor. The federal automotive regulator, the National Highway Traffic Safety Administration (NHTSA), issued a policy statement about the potential of self-driving cars and future regulatory activity in mid-2013. However, there are many obstacles to overcome before this technology is viable, widely available, and permissible. These include developing technology affordable enough for the consumer market, creating a framework to deal with legal and insurance challenges, adapting roadways to vehicle use if necessary, and addressing issues of driver trust and adoption of the new technology. There is even some question as to who will be considered the ‘driver’ in the self-driving realm.

Although self-driving cars remain few and far between, the technology is becoming ever more common and legally accepted. For example, NHTSA requires all newly manufactured cars to have at least a low level of autonomous vehicle technology. Some scholars even suggest that self-driving vehicles are legal under existing legal frameworks. Five states have some form of legislation expressly allowing self-driving cars or the testing of such vehicles within state boundaries. In fact, two states–California and Nevada–have even issued comprehensive regulations for both private use and testing of self-driving vehicles. Several companies, most notably Google (whose original prototype vehicles have driven over 500,000 miles), are aggressively pursuing the technology and advocating for legal changes in favor of SDVs. Automakers and suppliers from Bosch to Mercedes to Tesla are all pursuing the technology, and frequently provide updates on their self-driving car plans and projects.

The substantial benefits promised by SDVs are hard to ignore. By far the most frequently cited relate to safety and convenience. NHTSA’s 2008 Crash Causation Survey found that close to 90% of crashes are caused by driver mistakes. These mistakes, which include distraction, excessive speed, disobedience of traffic rules or norms, and misjudgment of road conditions, are factors within the driver’s control. SDVs are also expected to improve roadway capacity. Capacity improvement often means improvements in throughput, the maximum number of cars per hour per lane on a roadway, but can extend to other capacity concerns: hypothesized improvements include fewer necessary lanes due to increased throughput, narrower lanes because of the accuracy and driving control of SDVs, and a reduction in infrastructure wear and tear through fewer crashes. Finally, while supplemental transportation programs and senior shuttles have provided needed services in recent decades, SDVs have the ability to expand the user base of cars to those who would normally be unable to drive. The elderly, disabled, and even children may be beneficiaries.


Is the US Ready for the Next Cyber Terror Attack?

Ian Blodger, MJLST Staff Member

The US’s military intervention against ISIL carries with it a high risk of cyber-terror attacks. The FBI reported that ISIL and other terrorist organizations may turn to cyber attacks against the US in response to the US’s military engagement of ISIL. While no specific targets have been confirmed, likely attacks could range from website defacement to denial-of-service attacks. Luckily, recent cyber terror attacks attempting to destabilize the US power grid failed, but next time we may not be so lucky. Susan Brenner’s recent article, Cyber-threats and the Limits of Bureaucratic Control, published in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, describes the structural reasons for the US’s vulnerability to cyber attacks and offers one possible solution to the problem.

Brenner argues that traditional methods of investigation do not work well when it comes to cyber attacks. This ineffectiveness results from the obscured origin and often hidden underlying purpose of an attack, both of which are crucial in determining whether a law enforcement or military response is appropriate. That impairment creates problems in deciding which agency should control the investigation and response. A nation’s security from external attackers depends, in part, on its ability to present an effective deterrent to would-be attackers. In the case of cyber attacks, however, the US’s confusion over which agency should respond often precludes an efficient response.

Brenner argues that these problems are not transitory, but will increase in direct proportion to our reliance on complex technology. The current steps taken by the US are unlikely to solve the issue, since they do not address the underlying problem and instead continue to approach cyber terrorists as conventional attackers. Concluding that top-down command structures are unable to respond effectively to the threat of cyber attacks, Brenner suggests a return to a more primitive mode of defense. Rather than trusting the government to ensure the safety of the populace, Brenner suggests citizens should work with the government to ensure their own safety. This decentralized approach, modeled on British town defenses after the fall of the Roman Empire, may avoid the pitfalls of the bureaucratic approach to cyber security.

There are some issues with this proposed model for cyber security, however. Small British towns during the early Middle Ages may have been able to ward off attackers through an active, citizen-based defense, but the anonymity of the internet makes this approach challenging when applied to a digitized battlefield. Small British towns could easily identify threats because they knew who lived in the area. The internet, as Brenner concedes, makes it difficult to determine to whom any given person pays allegiance. Presumably, Brenner theorizes that individuals would simply respond to attacks on their own information, or enlist the help of others to fend off attacks. However, the anonymity of the internet would make organizing a collective defense chaotic. For example, an ISIL cyber terrorist could likely organize a collective US citizen response against a passive target by falsely claiming to have been attacked by it. Likewise, groups launching pre-emptive attacks against cyber terrorist organizations could be disrupted by other US groups that do not recognize the pre-emptive cyber strike as a defensive measure. This shows that the analogy between the defenses of a primitive British town and the Internet is not complete.

Brenner may argue that her alternative simply calls for individuals, corporations, and groups to build up their own defenses and protect themselves from impending cyber threats. While this approach would avoid the problems inherent in a bureaucratic approach, it ignores the fact that these groups are currently unable to protect themselves. Shifting these groups’ understanding of their responsibility for self-defense may spur innovation and increase investment in cyber protection, but this will likely be insufficient to stop a determined cyber attack. Large corporations like Apple, JPMorgan, Target, and others regularly hemorrhage confidential information as a result of cyber attacks, even though they have large financial incentives to protect that information. This suggests that an individualized approach to cyber protection would also likely fail.

With the threat of ISIL increasing, it is time for the United States to take additional steps to reduce the threat of a cyber terror attack. At this initial stage, the inefficiencies of bureaucratic action will result in a delayed response to large-scale cyber terror attacks. While allowing private citizens to band together for their own protection may have some advantages over government inefficiency, this too likely would not solve all cyber security problems.


The Benefits of Juries

Steven Groschen, MJLST Staff Member

Nearly 180 years ago, Alexis de Tocqueville postulated that jury duty benefits those who serve. In an often-quoted passage of Democracy in America he stated, “I do not know whether the jury is useful to those who have lawsuits, but I am certain it is highly beneficial to those who judge them.” Since that time, many commentators, including the United States Supreme Court, have echoed this belief. Although the position has strong intuitive appeal, it is worth asking whether there is any evidentiary basis to support it. Until recently, the scientific evidence on the effects of serving on a jury was scant. Fortunately for proponents of the jury system, the research of John Gastil is building a scientific basis for the positive effects of jury duty.

One of Gastil’s most extensive studies focused on finding a correlation between serving on a jury and subsequent voting patterns. For purposes of the study, Gastil and his colleagues compiled a large sample of jurors from eight counties across the United States. Next, the research team gathered voting records for the jurors in the sample, examining each juror’s voting patterns for the five years before and after jury service. Finally, regression analyses were performed on the data, and some interesting effects emerged. Individuals who were infrequent voters prior to serving as jurors on a criminal trial were 4-7% more likely to vote after serving. Interestingly, this effect held for previously infrequent voters regardless of the verdict reached in the criminal trials they served on. Further, for hung juries the effect held and was even stronger.
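For readers curious what an analysis of this general shape looks like, below is a minimal, purely illustrative sketch in Python. It uses fabricated data and an assumed effect size, not Gastil’s dataset or model; it simply shows how one might regress post-service turnout on prior voting frequency and criminal-jury service.

```python
# Illustrative sketch only: fabricated data and an assumed effect size,
# NOT Gastil's actual dataset or model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
prior_infrequent = rng.integers(0, 2, n)   # 1 = rarely voted before jury service
served_criminal = rng.integers(0, 2, n)    # 1 = deliberated on a criminal trial
# Assumed (hypothetical) effect: a modest turnout boost for previously
# infrequent voters who served on a criminal jury.
p_vote = 0.55 - 0.25 * prior_infrequent + 0.05 * prior_infrequent * served_criminal
voted_after = rng.binomial(1, p_vote)

df = pd.DataFrame({
    "voted_after": voted_after,
    "prior_infrequent": prior_infrequent,
    "served_criminal": served_criminal,
})

# Logistic regression with an interaction term, analogous in spirit to the
# juror-turnout analysis described above.
model = smf.logit("voted_after ~ prior_infrequent * served_criminal", data=df).fit(disp=False)
print(model.summary())
```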

Despite these findings, the jury is still out on whether the scientific evidence is substantial enough to support the historically asserted benefits of jury duty. More evidence is certainly needed; however, important policy questions regarding jury duty are already implicated. As researchers correlate jury participation with more aspects of civic life, it remains possible that negative effects of serving on a jury will be discovered. Would such findings justify modifying the jury selection process to preclude those who might be negatively affected? More importantly, do further findings of positive effects suggest that more protections are needed during the voir dire process to ensure certain classes are not excluded from serving on a jury, and thus from receiving those benefits?


America’s First Flu Season Under the ACA

Allison Kvien, MJLST Staff Member

Have you seen the “flu shots today” signs outside your local grocery stores yet? Looked at any maps tracking where in the United States flu outbreaks are occurring? Gotten a flu shot? This year’s flu season is quickly approaching, and with it may come many implications for the future of health care in this country. This year marks the first with the Patient Protection and Affordable Care Act (ACA) in full effect, so thousands of people will get their first taste of the ACA’s health care benefits in the upcoming months. The L.A. Times reported that nearly 10 million previously uninsured people now have coverage under the ACA. Though debate between opponents and proponents continues, the ACA has already survived a Supreme Court challenge and is well on its way to becoming a durable feature of the American healthcare system. Will the upcoming flu season prove to be any more of a challenge?

In a recent article entitled “Developing a Durable Right to Health Care,” in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, Erin Brown examined the durability of the ACA going forward. Brown explained that “[a]mong its many provisions, the ACA’s most significant is one that creates a right to health care in this country for the uninsured.” Another provision of the ACA is an “essential benefits package,” in which Congress included “preventative and wellness services,” presumably encompassing flu shots. For those relying on ACA coverage in the upcoming flu season, it may also be important to understand where the ACA’s vulnerabilities lie. Brown posited that the vulnerabilities are concentrated mostly in the early years of the statute, and that the federal right to health care may strengthen as the benefits take hold. How will the end of the ACA’s first year go? This is a very important question for many Americans, and Brown’s article examines several other questions that might be on the minds of millions in the upcoming months.


Infinite? In the Political Realm, the Internet May Not Be Big Enough for Everyone

Will Orlady, MJLST Staff Member

The Internet is infinite. At least, that’s what I thought. But Ashley Parker, a New York Times reporter, doesn’t agree. When it comes to political ad space, our worldwide information hub may not be the panacea politicians hoped for this election season.

Parker based her argument on two premises. First, not all Internet content providers are equal, at least when it comes to attracting Internet traffic. Second, politicians–especially those in “big” elections–wish to reach more people, motivating their campaigns to run ads on major content hubs such as YouTube.

But sites like YouTube can handle heavy network traffic. And, for the most part, political constituents do not visit such sites in order to view (or hear) political ads. So what limits a site’s ad space, if not the physical technology that facilitates the site’s user experience? Parker contends that the issue is not new: it is merely a function of supply and demand.

Ad space on so-called premium video streaming sites like YouTube falls into two categories: ads that can be skipped (“skip-able” ads) and ads that must be played in full before the viewer reaches the desired content (“reserved by” ads). The former are sold at auction without limit on quantity, but the price of each ad impression increases with demand. The latter are innately more expensive, but can be strategically purchased for reserved time slots, much like television ad space.

Skip-able ads are available for purchase without regard to number, but they are limited by price and by desirability. Because they are sold at auction, Parker contends, their value can increase ten-fold in times of high demand (during a political campaign, for example). Skip-able ads are, however, most seriously limited by their lack of desirability. Assuming, as I believe is fair to do here, that most Internet users actually skip the skip-able ads, advertising purchasers would be incentivized to purchase a site’s “reserved by” advertising space.
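To make the supply-and-demand point concrete, here is a minimal, purely illustrative Python sketch. It is not YouTube’s actual auction mechanism, and every number in it is invented; it simply shows how, in a simple second-price auction, the clearing price for an impression climbs as more bidders compete and as campaign season raises every bidder’s willingness to pay.

```python
# Illustrative sketch only; not YouTube's actual auction mechanics.
# All numbers are invented to show the supply-and-demand effect described above.
import random

def second_price_clearing(num_bidders, base_value=1.0, campaign_premium=0.0, seed=0):
    """Clearing price of a simple second-price auction for one ad impression.

    campaign_premium is a hypothetical bump in every bidder's willingness to pay,
    e.g., during an election season.
    """
    rng = random.Random(seed)
    bids = [base_value * rng.uniform(0.5, 1.5) + campaign_premium
            for _ in range(num_bidders)]
    bids.sort(reverse=True)
    # The winner pays the second-highest bid (or its own bid if it is the only bidder).
    return bids[1] if len(bids) > 1 else bids[0]

for bidders, premium in [(3, 0.0), (10, 0.0), (10, 2.0), (30, 5.0)]:
    price = second_price_clearing(bidders, campaign_premium=premium)
    print(f"{bidders:>2} bidders, premium {premium:.1f} -> clearing price ~{price:.2f}")
```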

“Reserved by” ads are sold as their name indicates, by reservation. And if the price of certain Internet ad space is determined by time or geography, it is no longer fungible. Thus, because not all Internet ad space is the same in price, quality, and desirability, certain arenas of Internet advertising are finite.

Parker’s argument ends with the conclusion that political candidates will now compete for ad space on the Internet. This issue, however, is not necessarily problematic or novel. Elections have always been adversarial, and I am not convinced that limited Internet ad space adds to campaign vitriol. An argument could be made to the contrary: that limited ad space will push candidates to spend resources on meaningful messages about election issues rather than on smear tactics. Campaign tactics notwithstanding, I do not believe that the Internet’s limited ad space presents an issue distinct from campaign advertising in other media. Rather, Parker’s argument merely forces purchasers and consumers of such ad space to recognize that the Internet, as an advertising and political communication medium, may be more similar to existing media than some initially believed.


A Review of Replay Technology in Major League Baseball

Comi Sharif, Managing Editor

This week marks the end of the 2014 Major League Baseball regular season, and with it, the completion of the first regular season under the league’s expanded rules on the use of instant replay technology. Though MLB initially resisted instant replay, holding out longer than other American professional sports leagues, an agreement between team owners, the players association, and the umpires association produced a broad expansion of the use of replay technology beginning this season.

The expanded rules permit managers to “challenge” at least one call made by an umpire during a game. The types of calls allowed to be challenged are limited to objective plays such as whether a runner was safe or out at a base, or whether a fielder caught or “trapped” a batted ball. Subjective umpire calls, including calls regarding balls and strikes and “check” swings, are not reviewable. The complete set of MLB’s instant replay rules is available here.

As alluded to above, the process of going from the idea of instant replay in baseball to actual implementation was long and complex. First, rule changes must be collectively bargained between MLB and the players association (MLBPA). Thus, the expansion of instant replay had to be raised during the most recent collective bargaining agreement (CBA) discussions in 2011. What both sides agreed to was language in the CBA stating that, subject to approval by the umpires association, MLB could expand the use of instant replay. Second, after agreeing to the general idea of more instant replay, MLB developed specific rules and policies for instant replay, which had to be approved by the owners of the 30 MLB franchises. Once the owners approved the specific rules, which they did unanimously, the rules could finally be put into action. One issue to watch is how each of the parties involved in the approval process reacts to the changes instant replay brings to the league. The current CBA expires in December 2016, at which time wholesale changes to the current instant replay system could be realized.

The replay technology used by MLB is unusual compared to that used by other professional sports leagues such as the National Basketball Association and the National Football League. In the NBA and NFL, referees or officials typically view video replays of a contested call themselves, using technology located at the playing venue. MLB, however, created a “Replay Operation Center” (ROC), located at MLB headquarters in New York City, where a team of umpires reviews video replays and communicates a final ruling through headsets to the umpires on the field. Additionally, MLB permits each team to have a “video specialist” in the clubhouse to watch for challengeable plays; the specialist can call the manager by phone to advise whether or not a play should be challenged.

In one sense, the MLB system may be advantageous because it allows the ROC to use the best available technology, whereas the NBA and NFL must adapt their replay systems so they can be used immediately, at every stadium, by the referees or officials on site. While NFL and NBA referees and officials typically look at one relatively small monitor when reviewing a play, the ROC houses 37 high-definition televisions, each of which can be subdivided into 12 smaller screens. Though this may not seem like a big deal to the casual observer, a number of calls are so close that the quality of the image available on replay can directly affect the outcome. One might conclude, then, that because MLB has more advanced technology at its disposal, its replay system is, in fact, more accurate. The MLB system does have its downsides, however. Outsourcing the review process can lead to lengthy delays and puts decisions in the hands of an umpire thousands of miles away from the action, which many find unappealing.

The site Retrosheet has a comprehensive collection of data on MLB’s replay system, including an entry for every play reviewed, its result, and the length of time taken for the review to be completed.

Overall, there are mixed reviews concerning the success of the expanded replay rules used by MLB this season. Though it is unclear exactly how MLB will adjust its system in the future, as increasingly effective technology becomes available, that technology’s impact on the sport of baseball is only likely to grow.


Apple’s Bark Is Worse Than Its Bite

Jessica Ford, MJLST Staff

Apple’s iPhone tends to garner a great deal of excitement from its aficionados for its streamlined aspects and much resentment from users craving customization on their devices. Apple’s newest smartphone model, the iPhone 6, is no exception. However, at Apple’s September 9, 2014 iPhone 6 unveiling, Apple announced that the new iOS 8 operating system encrypts emails, photos, and contacts when a user assigns a passcode to the phone. Apple is unable to bypass a user’s passcode under the new operating system and is accordingly unable to comply with government warrants demanding physical data extraction from iOS 8 devices.

The director of the FBI, James Comey, has already voiced concerns that this lack of access to iOS 8 devices could prevent the government from gathering information on a terror attack or child kidnappings.

Comey is not the only one to criticize Apple’s apparent attempt to bypass legal court orders and warrants. Orin Kerr, a criminal procedure and computer crime law professor at The George Washington University Law School, worries that this could essentially nullify the Supreme Court’s holding in Riley v. California this year, which requires the police to obtain a warrant before searching the contents of an arrested individual’s cell phone.

However, phone calls and text messages are not encrypted, and law enforcement can gain access to that data by serving a warrant on the wireless carrier. Law enforcement can also tap and monitor cell phones through the same process. Any data backed up to iCloud, including iMessages and photos, can be accessed under a warrant. The only data law enforcement cannot access without a passcode is data that would normally be backed up to iCloud but remains only on the device.

While security agencies argue otherwise, iOS 8 seems far from rendering Riley’s warrant requirement useless. Law enforcement still has several viable options for gaining information with a warrant. Furthermore, the Supreme Court has already made clear that the public’s interest in solving or preventing crimes does not outweigh the public’s interest in the privacy of phone data, even when there is a chance that the data on a cell phone at issue will be encrypted once the passcode locks the phone:

“[I]n situations in which . . . an officer discovers an unlocked phone, it is not clear that the ability to conduct a warrantless search would make much of a difference. The need to effect the arrest, secure the scene, and tend to other pressing matters means that law enforcement officers may well not be able to turn their attention to a cell phone right away . . . . If ‘the police are truly confronted with a “now or never” situation,’ . . . they may be able to rely on exigent circumstances to search the phone immediately . . . . Or, if officers happen to seize a phone in an unlocked state, they may be able to disable a phone’s automatic-lock feature in order to prevent the phone from locking and encrypting data . . . . Such a preventive measure could be analyzed under the principles set forth in our decision in McArthur, 531 U.S. 326, 121 S.Ct. 946, which approved officers’ reasonable steps to secure a scene to preserve evidence while they awaited a warrant.” Riley v. California, 134 S. Ct. 2473, 2487-88 (2014) (citations omitted).

With all the legal recourse that remains open, it appears somewhat hasty for the paragon-of-virtue FBI to be crying “big bad wolf.”


Cable TV Providers and the FCC’s Policy-Induced Competition Amidst Changing Consumer Preferences

Daniel Schueppert, MJLST Executive Editor

More and more Americans are getting rid of their cable TV and opting to consume their media of choice through other sources. Roughly 19% of American households with a TV do not subscribe to cable. This change in consumer preferences means that instead of dealing with the infamous “Cable Company Runaround,” many households are using their internet connection or tapping into local over-the-air broadcasts to get their TV fix. One obvious consequence of this change is that cable TV providers are losing subscribers and may be stuck carrying the costs of existing infrastructure and hardware. Meanwhile, the CEO of Comcast’s cable division announced that “it may take a few years” to fix the company’s customer experience.

In 2011, Ralitza A. Grigorova-Minchev and Thomas W. Hazlett published an article entitled “Policy-Induced Competition: The Case of Cable TV Set-Top Boxes” in Volume 12, Issue 1 of the Minnesota Journal of Law, Science & Technology. In their article, the authors noted that despite the FCC’s policy efforts to bring consumer cable boxes to retail stores like Best Buy, the vast majority of cable-subscribing households in America received their cable box from their cable TV operator. In the national cable TV market, the two elephants in the room are Comcast and Time Warner Cable. One of these two operators is often the only cable option in a given area, and together they provide over a third of the broadband internet and pay-TV services in the nation. Interestingly, Comcast and Time Warner Cable are currently pursuing a controversial $45 billion merger, and in the process both companies are shrewdly negotiating concessions from TV networks and taking shots at Netflix in FCC filings.

The current trend of cutting cable TV reflects a pushback against the traditional model of vertically integrating media, infrastructure, customer service, and hardware like cable boxes into one service. In contrast to the expensive cable box hardware required, and often provided, by traditional cable, streaming internet media onto a TV can usually be accomplished with any number of relatively low-cost, multi-function consumer electronic devices like Google’s Chromecast. This arguably gives customers more control over their media experience by letting them choose which hardware-specific services they bring into their homes. If customers no longer want to be part of this vertical model, big companies like Comcast may find it difficult to adjust to changing consumer preferences given the considerable regulatory pressure discussed in Grigorova-Minchev and Hazlett’s article.