Articles by mjlst

An Authorship-Centric Approach to the Authentication of Social-Networking Evidence

Sen “Alex” Wang, MJLST Staff Member

In Volume 13, Issue 1 of the Minnesota Journal of Law, Science & Technology, Ira P. Robbins called for special attention to social-networking evidence used in civil and criminal litigation and proposed an authorship-centric approach to the authentication of such evidence. In recent years, social-networking websites like Facebook, MySpace, and Twitter have become an ingrained part of our culture. Yet, as Robbins observes, people are remarkably careless about their online postings, documenting their every move on social-networking sites no matter how foolish or incriminating. The lives and careers of not only ordinary citizens but also lawyers, judges, and even members of Congress have been damaged by their own social-networking postings.

Social-networking sites are designed to facilitate interpersonal relationships and information exchange, but they have also been used to harass, intimidate, and emotionally abuse or bully others. With no effective check on fake accounts or false profiles, the anonymity of social-networking sites permits stalkers and bullies to take their harmful conduct well beyond traditional harassment. The infamous Lori Drew and Latisha Monique Frazier cases provide excellent examples. Moreover, hackers and identity thieves have taken advantage of the personal information posted on social-networking sites. Thus, Robbins argued that the growing popularity of social-networking sites and the rising number of fake accounts and incidents of hacking signal that information from these sites will begin to play a central role in both civil and criminal litigation.

Often unbeknownst to the social-networking user, postings leave a permanent trail that law-enforcement agents and lawyers frequently rely upon in crime solving and trial strategy. Robbins argued that the ease with which social-networking evidence can be altered, forged, or posted by someone other than the owner of the account should raise substantial admissibility concerns. Specifically, Robbins stated that social-networking postings are comparable to postings on websites rather than e-mails. Thus, the authentication of social-networking evidence is the critical first step to ensuring that the admitted evidence is trustworthy and, ultimately, that litigants receive a fair and just trial.

Robbins, however, further argued that current judicial approaches to the authentication of such evidence have failed to require rigorous showings of authenticity despite the demonstrated unreliability of information on social-networking sites. Under the first approach, the court effectively shirks its gate-keeping function, deflecting all reliability concerns associated with social-networking evidence to the finder of fact. Under the second approach, the court authenticates a social-networking posting by relying solely on the testimony of the recipient. The third approach requires testimony about who, aside from the owner, can access the social-networking account in question. The fourth approach focuses on establishing the author of a specific posting but fails to provide a thorough framework for doing so.

As a solution, Robbins proposed an authorship-centric approach that instructs courts to evaluate multiple factors when considering evidence from social-networking websites. The factors fall into three categories: account security, account ownership, and the posting in question. Although no one factor in these categories is dispositive, addressing each will help to ensure that admitted evidence possesses more than a tenuous link to its purported author. For account security, the inquiry should include at least the following questions: (1) Does the social-networking site allow users to restrict access to their profiles or certain portions of their profiles? (2) Is the account that was used to post the proffered evidence password protected? (3) Does anyone other than the account owner have access to the account? (4) Has the account been hacked into in the past? (5) Is the account generally accessed from a personal or a public computer? (6) How was the account accessed at the time the posting was made? As to account ownership, a court should address, at a minimum, the following key questions: (1) Who is the person attached to the account that was used to post the proffered evidence? (2) Is the e-mail address attached to the account one that is normally used by the person? (3) Is the alleged author a frequent user of the social-networking site in question? Finally, the court should ask at least these questions regarding the posting in question: (1) How was the evidence at issue placed on the social-networking site? (2) Did the posting at issue come from a public or a private area of the social-networking website? (3) How was the evidence at issue obtained from the website?

This authorship-centric approach properly shifts a court’s attention from content and account ownership to authorship, and it underscores the importance of fairness and accuracy in the outcome of judicial proceedings that involve social-networking evidence. In addition, it fits within the existing circumstantial-evidence authentication framework set out in Federal Rule of Evidence 901(b)(4) and will not require courts to engage in a more exhaustive inquiry than is already required for other types of evidence.


Republicans Win, Earth Loses

Vinita Banthia, MJLST Staff Member

The results of last Tuesday’s midterm elections were something of a victory for climate-change deniers around the country. In Iowa, Joni Ernst, a long-time climate-change denier, beat Democratic candidate Bruce Braley in the race for Senate. Ernst has remarked that she has “not seen proven proof that [climate change] is entirely man-made.” Meanwhile, Colorado elected climate-change skeptic Cory Gardner over Mark Udall, and Oklahoma elected one of the environment’s biggest enemies: James Inhofe. Inhofe has long maintained that the dangers of climate change are a hoax and recently wrote a book expressing that sentiment. Ironically, Inhofe will also serve as the new chair of the Senate Environment and Public Works Committee.

At the United Nations Climate Summit on September 23, 2014, President Obama pledged to world leaders that the United States is committed to doing its part to reduce carbon emissions while also maintaining economic growth. The extended Republican majority in the House will push back against President Obama’s Climate Action Plan and the nation’s environmental policy, and it will stiffen resistance to the Environmental Protection Agency’s heightened regulations. It will not, however, entirely halt progress on that front, because the White House remains dedicated to advancing its climate-change agenda and the President’s veto will prevent drastic changes to current law and policy.

In the last year, the U.S. has reduced its carbon emissions more than any other nation, and the President continues to push this trend to meet his administration’s goal of reducing carbon emissions to 17% below 2005 levels by 2020. As part of the Climate Action Plan, the White House aims to work with states and companies to cut carbon emissions from power plants, which would be one of the biggest steps in reducing carbon emissions in the nation’s history. In addition, new actions are being taken to encourage and implement alternative sources of energy (such as hydroelectric, solar, and wind power), which are projected to save consumers $10 billion on their energy bills and reduce carbon pollution by 300 million metric tons by 2030.

Finally, the U.S. aims to work with private companies to reduce hydrofluorocarbons (HFCs) in much the same way that it limited ozone-depleting chemicals such as chlorofluorocarbons (CFCs). In addition to taking measures domestically, the U.S. is working with developing nations to find sustainable and clean ways to build infrastructure and create economic growth. For example, the President has formed partnerships with African farmers to implement sustainable agricultural practices.

Despite the recent election of many Republicans who fail to appreciate the immediate dangers of climate change, many environmentalists, including David Doniger of the Natural Resources Defense Council, say that this kind of climate denial could hurt the party’s popularity in the long run. Colbert Report host Stephen Colbert mocked several Republicans for adamantly denying the man-made nature of climate change while repeatedly disclaiming that they are “not scientists.” Republicans adhering to this view ignore the glaring evidence linking human activity to the drastic rise in sea levels and global temperatures and the loss of biodiversity, without proposing alternative causes for these phenomena.

According to a recent poll conducted by the New York Times, over 54% of Americans believe that global warming is caused at least in part by human activity, the greatest number in American history to share this belief. A Chicago Council on Global Affairs poll found that more than 70% of Americans believe climate change is an important threat to the interests of the country, and half of the respondents felt that the government needs to do more to curb its effects.

As Senator Bernie Sanders of Vermont has suggested, most Republicans’ denial of the dangers of climate change stems from political pressure from their supporters in industries that contribute to environmental degradation. Since much of Republican campaign funding comes from some of the largest polluters, it is unlikely that Republican candidates will expressly change their views unless these culprit industries, such as the fossil fuel industry, move away from damaging processes and adopt sustainable practices. That change may come only by incentivizing “green” processes such as efficient and renewable energy sources, which is a key aspect of President Obama’s Climate Action Plan. Hence, even though the current House might try to push the Action Plan behind schedule, the nation is heading toward a more sustainable and green future whether James Inhofe and his fellow Republicans are on board or not.


The Data Dilemma for Cell Phone Carriers: To Throttle or Not to Throttle? FTC Seeks to Answer by Suing AT&T Over Speed Limitations for Wireless Customers

Benjamin Borden, MJLST Staff Member

Connecting to the Internet from a mobile device is an invaluable freedom in the modern age. That essential BuzzFeed quiz, artsy Instagram picture, or new request on Friendster is available in an instant. But suddenly, and often without warning, nothing is loading, everything is buffering, and your once-treasured piece of hand-held computing brilliance is no better than a cordless phone. Is it broken? Did the satellites fall from the sky? Did I accidentally pick up my friend’s BlackBerry? All appropriate questions. The explanation behind these dreadfully slow speeds, however, is more often than not data throttling, courtesy of wireless service providers. The phenomenon arises from the use of unlimited data plans on the nation’s largest cell phone carriers. AT&T and Verizon phased out their unlimited data plans in 2010 and 2011, respectively, just a few years after requiring unlimited data plans for new smartphone purchases. Wireless companies argue that tiered data plans offer more flexibility and better value for consumers, while others suggest that the refusal to offer unlimited data plans is motivated by a desire to increase revenue by selling to data-hungry consumers.

Despite no longer offering unlimited data plans to new customers, AT&T has allowed customers who previously signed up for these plans to continue that service. Verizon also allows users to continue, but refuses to offer discounts on new phones if they keep unlimited plans. Grandfathering these users into unlimited data plans, however, meant that wireless companies had millions of customers able to stream movies, download music, and post to social media without restraint and, more importantly, without a surcharge. Naturally, this was deemed to be too much freedom. So data throttling was born. Once a user of an unlimited data plan exceeds a certain download threshold, 3 to 5 GB per billing month for AT&T, speeds are lowered by 80 to 90% (to 0.15 Mbps in my experience). This speed limit makes even the simplest of smartphone functions an exercise in patience.

I experienced this data throttling firsthand and found myself consistently questioning where my so-called unlimited data had escaped to. Things I took for granted, like using Google Maps to find the closest ice cream shop, were suddenly ordeals taking minutes rather than seconds. Searching Wikipedia to settle that argument with a friend about the plot of Home Alone 4? Minutes. Requesting an Uber? Minutes. Downloading the new Taylor Swift album? Forget about it.

The Federal Trade Commission (FTC) understands this pain and wants to recoup the losses of consumers who were allegedly duped by the promise of unlimited data, only to have their usage capped. As a result, the FTC is suing AT&T for misleading millions of consumers about unlimited data plans. After recently consulting with the Federal Communications Commission (FCC), Verizon decided to abandon its data throttling plans. AT&T and Verizon argue that data throttling is a necessary component of network management. The companies suggest that without throttling, carrier service might become interrupted because of heavy data usage by a small group of customers.

AT&T had the opportunity to settle with the FTC, but indicated that it had done nothing wrong and would fight the case in court. AT&T contends that its wireless service contracts clearly informed consumers of the data throttling policy and that those customers still signed up for the service. Furthermore, there are other cellular service options for consumers who are dissatisfied with AT&T’s terms. These arguments are unlikely to provide much solace to wireless customers shackled to dial-up-level speeds.

If there is a silver lining, it is this: with my phone acting as a paperweight, I asked those around me for restaurant recommendations rather than turning to Yelp, I got a better understanding of my neighborhood by finding my way rather than following the blue dot on my screen, and I didn’t think about looking at my phone when having dinner with someone. I was proud. Part of me even wanted to thank AT&T. The only problem? I couldn’t tweet @ATT to send my thanks.


Open Patenting, Innovation, and the Release of the Tesla Patents

Blake Vettel, MJLST Staff Member

In Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology, Mariateresa Maggiolino and Marie Lillá Montagnani proposed a framework of standardized terms and conditions for Open Patenting. The framework sets out a standard system for patent holders to license their patents in order to encourage open innovation, in a way that is easy to administer for patent holders of all sizes. Maggiolino and Montagnani argued for an open patenting scheme in which the patent owner would irrevocably spread its patented knowledge worldwide through non-exclusive, no-charge licensing. Furthermore, the licensing system would be centrally operated online and would allow the patentee to customize certain clauses in the licensing agreement while maintaining a few compulsory clauses, such as a non-assertion pledge, that would keep the license open.

On June 12, 2014, Elon Musk, CEO of Tesla Motors, shocked the business world by announcing via blog post that “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” Musk described opening Tesla’s patents to others as a way to encourage innovation and growth within the electric car market, and he depicted Tesla’s true competition as gasoline cars rather than electric competitors. By allowing use of its patented technology, Tesla hopes to develop the electric car market and encourage innovation. Some commentators have been skeptical of the altruistic motive behind releasing the patents, arguing that it may in fact be a move intended to entice other electric car manufacturers to produce cars that are compatible with Tesla’s patented charging stations, in an effort to develop the network of stations around the country.

However, Musk did not unequivocally release these patents; instead, he conditioned their subsequent use on being in “good faith.” What constitutes a good-faith use of Tesla’s technology is not clear, and Tesla could have instead opted for a standardized licensing system of the kind proposed by Maggiolino and Montagnani. A clear, standardized licensing scheme with compulsory clauses designed to encourage the free movement of patented technology and spur innovation might have been more effective in promoting use of Tesla’s patents. An inventor who wants to use Tesla’s patents may be hesitant to rely on Musk’s promise not to initiate lawsuits, whereas that inventor could be much more confident of the right to use the patented technology under a licensing agreement. The extent to which Tesla’s patents will be used and their effect on the car market and open innovation remain to be seen, as does the true value of Tesla’s open innovation.


Scientific Responsibility: Why Lawyers Are Imperative in Scientifically Informed Neuro-Ethics

Thomas Hale-Kupiec, MJLST Staff Member

In Volume 11, Issue 1 of the Minnesota Journal of Law, Science & Technology, Eagleman et al. conclude in Why Neuroscience Matters for Rational Drug Policy that “the neuroscientific community should continue to develop rehabilitative strategies so that the legal community can take advantage of those strategies for a rational, customized approach.” Though this assertion may be valid in the context of drug addiction, I believe it is necessary to limit it to rehabilitative drug-addiction studies; allowing any further extension of this conclusion would be sociologically detrimental. I postulate that, beyond questions of who we define as a “Neuroscientist,” legal experts need to be at the forefront of this debate in order to better define and formulate ideas of “rehabilitation.”

In a related reflection entitled ‘Smart Drugs’: Do they work? Are they ethical? Will they be legal?, researcher Stephen Rose raises a number of ethical and neurological questions about mind-enhancing substances. The author poses an interesting question: what is “normal” for a brain? If someone undergoes pharmacological manipulation, what should the standard be for “abnormal”? For instance, Rose notes that some substances could be used to enhance cognition in patients with Down Syndrome. Is “abnormal” a valid designation in that case? Inextricably linked to this issue is Autism Spectrum Disorder: where on the spectrum does a cognitive “abnormality” manifest? Further, how do we define potentially less visible disorders such as “anxiety”? Given this spectrum of diseases and mental health conditions, the variety of measured “abnormalities,” and varying pharmacological treatment effectiveness, we need to be mindful that neuroscientific constructions are often blurry but always need to be conceptualized within the paradigm of ethics.

More than ever, the questions of “what is abnormal” and “what mandates treatment” need to be addressed in pharmaceutical policy. For instance, federally designated controlled substances like marijuana may be effective at treating anxiety and other medical conditions. Should the legal community allow Eagleman’s assertion to snowball? Imagine that an increasing number of states embrace evidence that the active ingredients in marijuana can treat certain medical conditions. Should the scientific community alone argue the validity of these findings? Legal professionals, bioethicists, and regulators need to be included in these questions. The point is not that data-driven outcomes should not be pursued; rather, a layer of ethics and sociological morals needs to be placed above these decisions.


FCC Issues Notice of Proposed Rulemaking to Ensure an Open Internet, Endangers Mid-Size E-Commerce Retailers

Emily Harrison, MJLST Staff

The United States Court of Appeals for the D.C. Circuit has twice struck down key provisions of the Federal Communications Commission’s (FCC) orders on how to ensure an open Internet. The Commission’s latest articulation is its May 15, 2014 notice of proposed rulemaking, In the Matter of Protecting the Open Internet. According to the notice, the Commission seeks to provide “broadly available, fast and robust Internet as a platform for economic growth, innovation, competition, free expression, and broadband investment and deployment.” The notice incorporates legal standards previously affirmed by the D.C. Circuit in Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014). For example, the FCC relies on Verizon to establish that it can use Section 706 of the Telecommunications Act of 1996 as its source of authority for promulgating Open Internet rules. Additionally, Verizon explained how the FCC can employ a valid “commercially reasonable” standard to police the behavior of Internet service providers.

Critics of the FCC’s proposal for network neutrality argue that the proposed standards are insufficient to ensure an open Internet. The proposal arguably allows broadband carriers to offer “paid prioritization” services. The sale of this prioritization not only leads to “fast” and “slow” traffic lanes, but also allows broadband carriers to charge content providers for priority in “allocating the network’s shared resources,” such as the relatively scarce bandwidth between the Internet and an individual broadband subscriber.

Presuming that there is some merit to the critics’ arguments, if Internet Service Providers (ISPs) could charge certain e-commerce websites different rates for a faster connection to customers, the prioritized websites could gain a competitive advantage in the marketplace. Disadvantaged online retailers could see a relative decrease in revenue. For example, without adequate net neutrality standards, an ISP could prioritize certain websites, such as Amazon or Target, and allow them optimal broadband speeds, while smaller and mid-sized retailers may only have the capital to access a slower connection. As a result, customers would consistently have a better retail experience on the websites of larger retailers because of the speed with which they can view products or complete transactions. Therefore, insufficient net neutrality policies could have a negative effect on the bottom line of many e-commerce retailers.

Comments can be submitted in response to the FCC’s notice of proposed rulemaking at: http://www.fcc.gov/comments


Self-Driving Vehicles Are Coming

Spencer Peck, RA State and Local Policy Program, MJLST Guest Blogger

Self-driving vehicles are coming, possibly within the decade. But what exactly do drivers, lawmakers, and local economies need to do to prepare for autonomous vehicles? On Friday, October 31, technical, legal, and policy experts will gather at the Humphrey School of Public Affairs to discuss exactly this. More information about the all-day conference, Autonomous Vehicles: The Legal and Policy Road Ahead, is available by following the link.

Self-driving vehicles (SDVs) are the future of automotive transportation. Driverless cars are often discussed as a “disruptive technology” with the ability to transform transportation infrastructure, expand access, and deliver benefits to a variety of users. Some observers estimate limited availability of driverless cars by 2020, with wide availability to the public by 2040. Recent announcements by Google and major automakers indicate huge potential for development in this area. In fact, an Audi RS7 recently piloted itself around the famous Hockenheimring race track; the fully autonomous car reached 150 mph and even recorded a lap five seconds faster than a human competitor. The federal automotive regulator, the National Highway Traffic Safety Administration (NHTSA), issued a policy statement in mid-2013 about the potential of self-driving cars and future regulatory activity. The year 2020 is the most often quoted time frame for the availability of the next level of self-driving vehicles, with wider adoption expected in 2040-2050. However, there are many obstacles to overcome to make this technology viable, widely available, and permissible. These include developing technology affordable enough for the consumer market, creating a framework to deal with legal and insurance challenges, adapting roadways to vehicle use if necessary, and addressing issues of driver trust and adoption of the new technology. There is even some question as to who will be considered the ‘driver’ in the self-driving realm.

Although self-driving cars are few and far between, the technology is becoming ever more present and legally accepted. For example, NHTSA requires all newly manufactured cars to have at least a low level of autonomous vehicle technology. Some scholars even suggest that self-driving vehicles are legal under existing legal frameworks. Five states have some form of legislation expressly allowing self-driving cars or the testing of such vehicles within state boundaries. In fact, two states, California and Nevada, have issued comprehensive regulations for both private use and testing of self-driving vehicles. Several companies, most notably Google (whose original prototype vehicles have driven over 500,000 miles), are aggressively pursuing the technology and advocating for legal changes in favor of SDVs. Automotive manufacturers from Bosch to Mercedes to Tesla are all pursuing the technology, and frequently provide updates on their self-driving car plans and projects.

The substantial benefits promised by SDVs are hard to ignore. By far the greatest implications referenced by those in the field relate to safety and convenience. NHTSA’s 2008 Crash Causation Survey found that close to 90% of crashes are caused by driver mistakes. These mistakes, which include distraction, excessive speed, disobedience of traffic rules or norms, and misjudgment of road conditions, are factors within the driver’s control. SDVs also promise roadway capacity improvements, most often framed as gains in throughput, the maximum number of cars per hour per lane on a roadway, but extending to other capacity concerns as well. Other hypothesized improvements include fewer necessary lanes due to increased throughput, narrower lanes because of the accuracy and driving control of SDVs, and a reduction in infrastructure wear and tear through fewer crashes. Finally, while supplemental transportation programs and senior shuttles have provided needed services in recent decades, SDVs have the ability to expand the user base of cars to those who would normally be unable to drive. The elderly, disabled, and even children may be beneficiaries.


Is the US Ready for the Next Cyber Terror Attack?

Ian Blodger, MJLST Staff Member

The US’s military intervention against ISIL carries with it a heightened risk of cyber-terror attacks. The FBI has reported that ISIL and other terrorist organizations may turn to cyber attacks against the US in response to the US’s military engagement of ISIL. While no specific targets have been confirmed, likely attacks range from website defacement to denial-of-service attacks. Luckily, recent cyber-terror attempts to destabilize the US power grid failed, but next time we may not be so lucky. Susan Brenner’s recent article, Cyber-threats and the Limits of Bureaucratic Control, published in Volume 14, Issue 1 of the Minnesota Journal of Law, Science & Technology, describes the structural reasons for the US’s vulnerability to cyber attacks and offers one possible solution to the problem.

Brenner argues that traditional methods of investigation do not work well when it comes to cyber attacks. This ineffectiveness results from the obscured origin and often hidden underlying purpose of an attack, both of which are crucial in determining whether a law-enforcement or military response is appropriate. The impairment leads to problems in assessing which agency should control the investigation and response. A nation’s security from external attackers depends, in part, on its ability to present an effective deterrent to would-be attackers. In the case of cyber attacks, however, the US’s confusion over which agency should respond often precludes an efficient response.

Brenner argues that these problems are not transitory but will increase in direct proportion to our reliance on complex technology. The current steps taken by the US are unlikely to solve the issue, since they do not address the underlying problem and instead continue to treat cyber terrorists as conventional attackers. Concluding that top-down command structures are unable to respond effectively to the threat of cyber attacks, Brenner suggests a return to a more primitive mode of defense. Rather than trusting the government to ensure the safety of the populace, Brenner suggests citizens should work with the government to ensure their own safety. This decentralized approach, modeled on British town defenses after the fall of the Roman Empire, may avoid the pitfalls of the bureaucratic approach to cyber security.

There are some issues with this proposed model for cyber security, however. Small British towns during the early Middle Ages may have been able to ward off attackers through an active, citizen-based defense, but the anonymity of the Internet makes this approach challenging when applied to a digitized battlefield. Small British towns could easily identify threats because they knew who lived in the area. The Internet, as Brenner concedes, makes it difficult to determine to whom any given person owes allegiance. Presumably, Brenner theorizes that individuals would simply respond to attacks on their own information or enlist the help of others to fend off attacks. However, the anonymity of the Internet would mean utter chaos in mounting a collective defense. For example, an ISIL cyber terrorist could likely organize a collective US citizen response against a passive target simply by claiming to have been attacked. Likewise, groups launching pre-emptive attacks against cyber-terrorist organizations could be disrupted by other US groups that do not recognize the pre-emptive cyber strike as a defensive measure. This shows that the analogy between the defenses of a primitive British town and the Internet is incomplete.

Brenner may respond that her alternative simply calls for individuals, corporations, and groups to build up their own defenses and protect themselves from impending cyber threats. While this approach would avoid the problems inherent in a bureaucratic approach, it ignores the fact that these groups are currently unable to protect themselves. Shifting these groups’ understanding of their responsibility for self-defense may spur innovation and increase investment in cyber protection, but that will likely be insufficient to stop a determined cyber attack. Large corporations like Apple, JPMorgan, and Target often hemorrhage confidential information as a result of cyber attacks, even though they have large financial incentives to protect that information. This suggests that an individualized approach to cyber protection would also likely fail.

With the threat of ISIL increasing, it is time for the United States to take additional steps to reduce the threat of a cyber terror attack. At this initial stage, the inefficiencies of bureaucratic action will result in a delayed response to large-scale cyber terror attacks. While allowing private citizens to band together for their own protection may have some advantages over government inefficiency, this too likely would not solve all cyber security problems.


The Benefits of Juries

Steven Groschen, MJLST Staff Member

Nearly 180 years ago, Alexis de Tocqueville postulated that jury duty was beneficial to those who participated. In an often-quoted passage of Democracy in America, he stated that “I do not know whether the jury is useful to those who have lawsuits, but I am certain it is highly beneficial to those who judge them.” Since that time many commentators, including the United States Supreme Court, have echoed this belief. Although this position possesses strong intuitive appeal, it is necessary to ask whether there is any evidentiary basis to support it. Until recently, the scientific evidence on the effects of serving on a jury was scant. Fortunately for proponents of the jury system, the research of John Gastil is building a scientific basis for the positive effects of jury duty.

One of Gastil’s most extensive studies focused on finding a correlation between serving on a jury and subsequent voting patterns. For purposes of the study, Gastil and his colleagues compiled a large sample of jurors from eight counties across the United States. Next, the research team gathered voting records for the jurors in the sample, examining each juror’s voting patterns for the five years prior to and following jury service. Finally, regression analyses were performed on the data, and some interesting effects were discovered. Individuals who were infrequent voters prior to serving as jurors on a criminal trial were 4 to 7% more likely to vote after serving. Interestingly, this effect held for previously infrequent voters regardless of the verdict reached in the criminal trials on which they served. Further, for hung juries the effect held and was even stronger.

Despite these findings, the jury is still out on whether the scientific evidence is substantial enough to support the historically asserted benefits of jury duty. More evidence is certainly needed; however, important policy questions regarding jury duty are already implicated. As researchers begin correlating jury participation with more aspects of civic life, there remains a possibility that negative effects of serving on a jury will be discovered. Would such findings serve as a rationale for modifying the jury-selection process to exclude those who might be negatively affected? More importantly, do further findings of positive effects suggest that more protections are needed during the voir dire process to ensure certain classes are not excluded from serving on a jury and thus from receiving those benefits?