February 2015

Revisiting the Idea of a Patent Small Claims Court

Comi Sharif, Managing Editor

In 2009, Robert P. Greenspoon explored the idea of adjusting the patent court system to improve efficiency in the adjudication of small-scale claims. His article, Is the United States Finally Ready for a Patent Small Claims Court?, appearing in Volume 10, Issue 2 of the Minnesota Journal of Law, Science & Technology, pointed out the deterrent effect that the high transaction costs of traditional patent litigation have on inventors trying to protect their intellectual property. Greenspoon argues that when patent holders are merely trying to recover small sums from infringers, the costs of the lengthy and expensive litigation process often outweigh the remedies available through it. As a result, Greenspoon suggests the creation of a “Patent Small Claims Court” to resolve these disputes. With more than five years having passed since Greenspoon’s article, it makes sense to reexamine the topic and identify some of the recent developments related to it.

In May of 2012, the USPTO and the United States Copyright Office co-sponsored a roundtable discussion to consider the possible introduction of small claims courts for patent and copyright claims. A few months later, the USPTO held another forum focused solely on patent small claims proceedings. A major emphasis of these discussions was the new court’s conformity with the U.S. Constitution (an issue Greenspoon addressed in his article). In December of 2012, the USPTO published a questionnaire seeking public feedback on the idea of a patent small claims court; the official request and list of questions appear in the Federal Register. The survey focused on matters relating to subject matter jurisdiction, venue, case management, appellate review, and available remedies. The deadline for submitting responses has since passed, but the results of the survey remain unclear.

In his article, Greenspoon addresses a few of the unsuccessful past attempts to create a small claims patent court. In 2013, the House of Representatives passed a bill that authorized further study into the idea of developing a pilot program for patent small claims procedures in certain judicial districts. See H.R. 3309, 113th Cong. (2013). The Senate did not pass the bill, however, so no further progress occurred.

Overall, though there appears to be continued interest in creating a patent small claims system, it doesn’t seem likely that one will be created in the near future. The idea is far from dead, though, and perhaps some of Greenspoon’s proposals can still help influence a change. Stay tuned.


Postmortem Privacy: What Happens to Online Accounts After Death?

Steven Groschen, MJLST Staff Member

Facebook recently announced a new policy that grants users the option of appointing an executor of their account. This policy change means that an individual’s Facebook account can continue to exist after the original creator has passed away. Although Facebook status updates from “beyond the grave” are certainly a peculiar phenomenon, the development fits nicely into the larger debate over how to handle one’s digital assets after death.

Rebecca G. Cummings, in her article The Case Against Access to Decedents’ Email: Password Protection as an Exercise of the Right to Destroy, discusses some of the arguments for and against providing access to a decedent’s online accounts. Those favoring access may assert one of two rationales: (1) access eases administrative burdens for personal representatives of estates; and (2) digital accounts are merely property to be passed on to one’s descendants. The response from those opposing access is that the intent of the deceased should be honored above other considerations. Further, they argue that if there is no clear intent from the deceased (which is not uncommon, because many Americans die without wills), the presumption should be that the decedent’s online accounts were intended to remain private.

Email and other online accounts (e.g., Facebook, Twitter, dating profiles) present novel problems for the property rights of the deceased. Historically, a diary or the occasional love letter was among the most intimate property that could be transferred to one’s descendants. The vast catalog of information available in an email account drastically changes what can be passed on. In contrast to a diary, an email account contains far more than the highlights of an individual’s day; emails provide a detailed record of an individual’s daily tasks and communications. Interestingly, this in-depth cataloging of daily activities has led some to argue that the information should be passed on as a way of creating a historical archive. There is certainly historical value in preserving an individual’s social media or email accounts, but it must be balanced against the potential invasion of his or her privacy.

As of June 2013, seven states had passed laws that explicitly govern digital assets after death. The latest development in this area, however, is the Uniform Fiduciary Access to Digital Assets Act, created by the Uniform Law Commission. The act attempts to create consistency among the states in how digital assets are handled after an individual’s death. Presently, the act is being considered for enactment in fourteen states. It grants fiduciaries in certain instances the “same right to access those [digital] assets as the account holder, but only for the limited purpose of carrying out their fiduciary duties.” Whether this act will satisfy both sides of the debate remains to be seen.


Recent Developments Affecting the “Fracking” Industry

Neal Rasmussen, MJLST Staff Member

In “Notes from Underground: Hydraulic Fracturing in the Marcellus Shale,” from Volume 12, Issue 2 of the Minnesota Journal of Law, Science & Technology, Joseph Dammel discussed the then-current state of hydraulic fracturing (“fracking”) and offered various “proposals that protect public concerns and bolster private interests.” Since the Note’s publication in 2011, there have been major changes in the hydraulic fracturing industry as more states and cities have begun to question whether the reward is worth the risk.

Since 2011, required disclosures of the fluids used in fracking have taken effect in fourteen additional states, bringing the number of states that require disclosure to twenty. While required disclosures have alleviated some concerns, many believe they do not go far enough and have pushed to ban fracking outright. Vermont was the first state to do so, in 2012. Although progressive, the ban was largely symbolic, as Vermont contains no major natural gas deposits. In late 2014, however, New York Governor Andrew Cuomo made a landmark decision by announcing that fracking would be banned within New York State. Many cities have begun to pass bans as well, including Denton, Texas, right in the heart of oil and natural gas country. Florida could be the next state to join the anti-fracking movement: citing concerns about the potential health risks associated with the activity, two Florida senators introduced a bill in late 2014 that sought to ban all fracking activities, and a state representative introduced a similar bill at the beginning of 2015.

The bans have not been without controversy. The fracking industry has challenged many of the local bans, arguing that they are preempted by state law and exceed the cities’ authority. After Denton passed its ban, the Texas Oil & Gas Association sought an injunction, arguing that the city did not have the authority to implement such a measure. It remains to be seen whether the challenge will succeed, but if the results in Colorado, where local fracking bans have been overturned on state preemption grounds, are any indication, the fracking industry has reason to be confident. Unless and until there is a major federal decision on fracking regulation, the industry will have to juggle the various state and local regulations, which are becoming less friendly as fracking grows more controversial nationwide.


Privacy in the Workplace and Wearable Technology

Jessica Ford, MJLST Staff Member

Lisa M. Durham Taylor’s article, The Times They Are a-Changin’: Shifting Norms and Employee Privacy in the Technological Era, in Volume 15, Issue 2 of the Minnesota Journal of Law, Science & Technology discusses employees’ workplace privacy rights with regard to new technologies. Taylor spends much of the article focusing on privacy concerns surrounding correspondence in the workplace. She notes that in certain cases employees may be able to expect their personal email correspondence to remain private, as seen in the 2008 case Pure Power Boot Camp, Inc. v. Warrior Fitness Boot Camp, LLC. Generally, however, employers can legally monitor email messages and any websites an employee visits, including personal accounts.

Since Taylor’s article, new technologies have emerged, bringing new privacy implications for the workplace with them. Wearable technologies such as Google Glass, smart watches, and fitness bands find themselves in a legal void, particularly with regard to privacy. Several workplaces have implemented Google Glass through Google’s Glass at Work program. While this could boost productivity, especially in medical settings, it also means an employer could review every recorded moment, even those containing personal conversations or experiences.

Smart watches could also have a troubling future due to the lack of legal boundaries. At the moment, it would be simple for a company to require employees to wear GPS-enabled smart watches and use them to track employees’ locations, see whether an employee is exceeding his or her break time, and communicate with employees instantaneously. Such uses could be frustrating, if not invasive. All messages and activities could also be tracked outside the office, essentially eliminating any semblance of personal privacy. Additionally, as Taylor notes in her article, there is precedent upholding a “public employer’s search of text messages sent from and received on the employee’s employer-issued paging device.” In that 2010 case, City of Ontario v. Quon, the Court upheld the search even though it reached personal messages.

For the moment, it appears that employers are erring on the side of caution. It will take some time to see whether the legal framework Taylor discusses will be applied to wearable technologies and whether it will be more permissive or restrictive for employers.


Could Changes for NEPA Be on the Horizon?

Allison Kvien, MJLST Staff Member

The National Environmental Policy Act (NEPA) was one of the first broad, national environmental protection statutes ever written. NEPA’s aim is to ensure that agencies give proper consideration to the environment before taking any major federal action that significantly affects it. NEPA requires agencies to prepare Environmental Impact Statements (EISs) and Environmental Assessments (EAs) for these projects. NEPA is often criticized as ineffective in the courts for environmental plaintiffs seeking review of federal agency actions. Environmental petitioners who have brought NEPA issues before the Supreme Court have never won.

The Court has never reversed a lower court ruling on the ground that the lower court failed to apply NEPA with sufficient rigor. Indeed, as described at the outset, the Court has not even once granted review to consider the possibility that a lower court erred in that direction and then heard the case on the merits. The Court has instead reviewed cases only when NEPA plaintiffs won below, and then the Court has reversed, typically unanimously.

Because environmental plaintiffs have never prevailed before the Supreme Court on a NEPA issue, many view the statute as a weak tool and have long called for strengthening or overhauling it.

According to a recent report from the Environmental Law Reporter, President Obama is now “leaning on NEPA” for the work he hopes to accomplish in improving the permitting process for infrastructure development, but it does not look like he is working to improve NEPA itself:

The president’s initiative has identified a number of permitting improvements, but it does not include a serious effort to force multiple agencies to align their permitting processes. A key to forcing multiple agencies to work together on project reviews and approvals is found in an unlikely place: NEPA. The statute is overdue for a makeover that will strengthen how it identifies and analyzes environmental impacts for federal decisionmakers. In doing so, it can provide the framework that will require multiple agencies to act as one when reviewing large projects.

Though Obama’s proposal may not address improvements to NEPA itself, could it help those who have long wished to give NEPA an overhaul? This is not the first time in the last couple of years that the President has talked about using NEPA. In March 2013, Bloomberg reported that Obama was “preparing to tell all federal agencies for the first time that they should consider the impact on global warming before approving major projects, from pipelines to highways.” With NEPA being key to some of President Obama’s initiatives, could there be more political capital to enact some long-sought changes to the statute? There might be some hope for NEPA just yet.


Admission of Scientific Evidence in Criminal Cases Under the Daubert Standard

Sen “Alex” Wang, MJLST Staff Member

In Crawford v. Washington, the Supreme Court, in a unanimous decision, overruled its earlier decision in Ohio v. Roberts by rejecting the admission of out-of-court statements that are “testimonial” in nature absent an opportunity for cross-examination. However, it was not clear whether the constitutional right of confrontation applied only to traditional witnesses (like the declarant in Crawford) or whether it also applied to scientific evidence and experts. The Court subsequently clarified this point in Melendez-Diaz v. Massachusetts and Bullcoming v. New Mexico, where it upheld defendants’ confrontation right to cross-examine the analysts who performed the scientific tests. However, compared to traditional testimony from eyewitnesses, scientific evidence (e.g., blood alcohol measurement, field breathalyzer results, genetic testing) is a relatively new development in criminal law. The advance of modern technologies raises a new question: whether such evidence is sufficiently reliable to avoid triggering the Confrontation Clause.

This question is discussed in a student note and comment titled The Admission of Scientific Evidence in a Post-Crawford World in Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology. The author, Eric Nielson, pointed out that the ongoing dispute in the Court about requiring analysts to testify before admitting scientific findings misses the mark. Specifically, scientific evidence, especially the result of an analytical test, is an objective, not a subjective, determination. In the courtroom, the testimony of a scientific witness is based mainly on review of the content of the witness’s report, not on the witness’s memory. Thus, according to the author, though Justice Scalia’s bold statement in Crawford that “reliability is an amorphous, if not entirely subjective, concept[,]” may be right in the context of traditional witnesses, it is clearly wrong in the realm of science, where reliability is a measurable quantity. In particular, the author suggested that scientific evidence should be admitted under the standard articulated by the Court in Daubert v. Merrell Dow Pharmaceuticals.

As the author emphasizes, a well-drafted technical report should answer all of the questions that would be asked of the analyst. Given that there is currently no national or widely accepted set of standards for forensic science written reports or testimony, the author proposed that a scientific report conforming to the Daubert standard include the following key components (a rough sketch of such a report as structured data follows the list):

1) sample identifier, including any identifier(s) assigned to the sample during analysis;
2) documentation of sample receipt and chain of custody;
3) analyst’s name;
4) analyst’s credentials;
5) evidence of the analyst’s certification or qualification to perform the specific test;
6) the laboratory’s certification;
7) the testing method, either referencing an established standard (e.g., ASTM E2224-10 Standard Guide for Forensic Analysis of Fibers by Infrared Spectroscopy) or a copy of the method if it is not publicly available;
8) evidence of the effectiveness and reliability of the method, whether from peer-reviewed journals, method certification, or internal validation testing;
9) results of testing, including the results of all standards or controls run as part of the testing;
10) copies of all results, figures, graphs, etc.;
11) a copy of the calibration log or certificate for any equipment used;
12) any observations, deviations, and variances, or an affirmative statement that none were observed;
13) the analyst’s statement that all this information is true, correct, and complete to the best of his or her knowledge;
14) the analyst’s statement that the information is consistent with various hearsay exceptions;
15) evidence of second-party review, generally by a supervisor or qualified peer;
16) posting of a copy to a publicly maintained database; and
17) notification of the authorizing entity via email of the completion of the work and the location of the posting.
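Purely as an illustration of how the proposed components might be organized, here is a minimal Python sketch of a structured report object. The class and field names are hypothetical, invented for this post; they are not drawn from the note or from any existing forensic reporting standard.

    from dataclasses import dataclass
    from typing import Dict, List, Optional

    # Hypothetical sketch only: the class and field names are invented for
    # this post and do not come from the note or any forensic standard.
    @dataclass
    class ForensicReport:
        sample_id: str                       # 1) sample identifier(s)
        chain_of_custody: List[str]          # 2) receipt and custody records
        analyst_name: str                    # 3) analyst's name
        analyst_credentials: str             # 4) analyst's credentials
        analyst_certification: str           # 5) qualification to perform this test
        lab_certification: str               # 6) laboratory's certification
        method: str                          # 7) testing method or cited standard
        method_validation: str               # 8) evidence of reliability/validation
        results: Dict[str, str]              # 9)-10) results, controls, figures
        calibration_records: List[str]       # 11) calibration logs or certificates
        deviations: str = "None observed"    # 12) observations, deviations, variances
        analyst_attestation: bool = False    # 13)-14) truth and hearsay statements
        reviewed_by: Optional[str] = None    # 15) second-party reviewer
        public_record_url: Optional[str] = None  # 16) posting to a public database
        notification_sent: bool = False      # 17) notice to the authorizing entity

        def missing_components(self) -> List[str]:
            """List proposed components that this report has not yet satisfied."""
            gaps = []
            if not self.analyst_attestation:
                gaps.append("analyst attestation")
            if self.reviewed_by is None:
                gaps.append("second-party review")
            if self.public_record_url is None:
                gaps.append("public posting")
            if not self.notification_sent:
                gaps.append("notification of authorizing entity")
            return gaps

A structure along these lines would make it straightforward to check mechanically whether a report is missing any of the proposed elements before it is offered into evidence.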

Per the author, because scientific evidence is especially probative, the current refusal to demand evidence of reliability, method validation, and scientific consensus has allowed shoddy work and practices to impersonate dependable science in the courts. This is an injustice to the innocent and the guilty alike.


Mechanics or Manipulation: Regulation of High Frequency Trading Since the “Flash Crash” and a Proposal for a Preventative Approach

Dan Keith, MJLST Staff Member

In May of 2010, the Dow Jones Industrial Average plunged nearly 1,000 points in a matter of minutes and recovered within half an hour. The disturbing part? No one knew why.

An investigation by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) determined that, in complicated terms, “a rapid automated sale of 75,000 E-mini S&P 500 June 2010 stock index futures contracts (worth about $4.1 billion) over an extremely short time period created a large order imbalance that overwhelmed the small risk-bearing capacity of financial intermediaries–that is, the high-frequency traders and market makers.” After about ten minutes of buying the E-mini, high-frequency traders (HFTs) began rapidly selling the same instrument to unwind the inventories they had accumulated. This unloading came at a time when liquidity was already low, so the rapid and aggressive selling accelerated the downward spiral. As a result of this volatility and the overflowing E-mini inventories, HFTs ended up passing contracts back and forth in a game of financial “hot potato.”

In simpler terms, on that day in May of 2010, a number of HFT algorithms had “glitched,” generating a feedback loop that sent stock prices spiraling and, in some cases, skyrocketing.

This event put High Frequency Trading on the map for both the public and regulators. The SEC and the CFTC have responded with significant regulations meant to curb the mechanistic risks that left the stock market vulnerable in the spring of 2010. Those regulations include new reporting systems like the Consolidated Audit Trail (CAT), which is supposed to allow regulators to track HFT activity through the data it produces as it comes in. Furthermore, Regulation Systems Compliance and Integrity (Reg SCI), a regulation still being negotiated into its final form, would require that HFTs and other eligible financial groups “carefully design, develop, test, maintain, and surveil systems that are integral to their operations. Such market participants would be required to ensure their core technology meets certain standards, conduct business continuity testing, and provide certain notifications in the event of systems disruptions and other events.”

While these regulations are appropriate for the mechanistic failures of HFT activity, regulators have largely overlooked an aspect of High Frequency Trading that deserves more attention: nefarious, manipulative HFT practices. These come in the form of either “human decisions” or manipulative mechanisms built into the algorithms that animate High Frequency Trading. “Spoofing,” “smoking,” and “stuffing” go by different names, with small variations, but each of these activities involves placing large orders for stock and quickly cancelling or withdrawing them in order to create false market data.
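To make that pattern concrete, here is a toy Python sketch, not drawn from any actual surveillance system, that flags traders whose large orders are overwhelmingly cancelled within a short window, the basic footprint of the place-and-cancel behavior described above. The data format and thresholds are assumptions chosen purely for illustration.

    from collections import defaultdict
    from dataclasses import dataclass
    from typing import List, Optional

    # Toy illustration only: flags traders whose large orders are mostly
    # cancelled within a short window, the basic footprint of the
    # place-and-cancel pattern described above. Thresholds are arbitrary.
    @dataclass
    class Order:
        trader_id: str
        size: int
        placed_at: float                      # seconds since start of session
        cancelled_at: Optional[float] = None  # None if never cancelled

    def flag_possible_spoofing(orders: List[Order],
                               min_size: int = 1000,
                               cancel_window: float = 1.0,
                               cancel_ratio: float = 0.9) -> List[str]:
        """Return trader IDs whose large orders are overwhelmingly cancelled
        within cancel_window seconds of being placed."""
        placed = defaultdict(int)
        quick_cancels = defaultdict(int)
        for o in orders:
            if o.size < min_size:
                continue
            placed[o.trader_id] += 1
            if o.cancelled_at is not None and o.cancelled_at - o.placed_at <= cancel_window:
                quick_cancels[o.trader_id] += 1
        return [t for t in placed
                if placed[t] >= 10 and quick_cancels[t] / placed[t] >= cancel_ratio]

    # Example: a trader who places and almost immediately cancels many large
    # orders is flagged; one whose orders rest on the book is not.
    orders = [Order("T1", 5000, t, t + 0.2) for t in range(20)]
    orders += [Order("T2", 5000, t) for t in range(20)]
    print(flag_possible_spoofing(orders))   # ['T1']

Real surveillance systems work on far larger data sets and far subtler signals; the sketch is meant only to show why rapid mass cancellations leave a detectable footprint in order data.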

Regulators have responded with “deterrent”-style legislation that outlaws this type of activity. Regulators and lawmakers have yet, however, to introduce regulations that would truly “prevent” as opposed to simply “deter” these types of activities. Plans for truly preventative regulations can be modeled on current practices and existing regulations. A regulation of this kind only requires the right framework to make it truly effective as a preventative measure, stopping “Flash Crash” type events before they can occur.