I’m Not a Doctor, But…: E-Health Records Issues for Attorneys

Ke Huang, MJLST Lead Articles Editor

The Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act) generally provides that, by 2015, healthcare providers must comply with the Act’s electronic health record (EHR) benchmarks or the government will reduce those providers’ Medicare payments by one percent.

These provisions of the HITECH Act are more than a health policy footnote. For attorneys especially, the growing use of EHRs raises several legal issues. Indeed, in Volume 10, Issue 1 of the Minnesota Journal of Law, Science & Technology, published six years ago, Kari Bomash analyzes the consequences of EHRs in three law-related respects. In Privacy and Public Health in the Information Age, Bomash discusses how an amendment to the Minnesota Health Records Act relates to: (1) privacy, especially patient consent; (2) data security (Bomash was almost prescient, given the growing security concerns); and (3) data-use regulations that affect medical doctors.

Bomash’s discussion is not exhaustive. EHRs also raise legal issues running the gamut from intellectual property and e-discovery to malpractice. Given that software runs EHRs, intellectual property law is very much implicated, so much so that some EHR proponents even support open source. (Another MJLST article explains the concept of open source.)

E-discovery may be more straightforward. Like other parties maintaining electronically stored information, health entities storing EHRs must comply with the rules governing discovery.

And malpractice? One doctor suggested in a recent Wall Street Journal op-ed that EHRs interfere with a doctor’s quality of care. Since quality of care, or lack thereof, is correlated with malpractice actions, commentators have raised the concern that EHRs could increase malpractice actions. A 2010 New England Journal of Medicine study addressed this topic but could not provide a conclusive answer.

My personal experience with EHRs is, in fact, one of the reasons that led me to want to become an attorney. As a child growing up in an immigrant community, I often accompanied adult immigrants to interpret at contract closings, small-business transactions, and even clinic visits. Helping in those matters sparked my interest in law. At one of the clinic visits, I noticed that an EHR print-out for my female cousin stated that she was male. I explained the error to her.

“I suppose you have to ask them to change it, then,” she said.

I did. I learned from talking to the clinic administrator that the EHR software was programmed to recognize female names and, for names that were ambiguous, as my cousin’s was, to categorize the patient automatically as male, even when the visit was for an ob-gyn check-up.
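The administrator’s explanation amounts to a name-lookup rule with a hard-coded fallback. As a minimal sketch of that flawed logic (the name list, function, and labels are my own illustration, not the clinic’s actual software):

```python
# Hypothetical reconstruction of the default rule the administrator described.
KNOWN_FEMALE_NAMES = {"Mary", "Susan", "Maria"}  # illustrative only

def infer_recorded_sex(first_name: str) -> str:
    """Guess the sex to record for a patient from the first name alone."""
    if first_name in KNOWN_FEMALE_NAMES:
        return "F"
    # Ambiguous or unrecognized names silently fall through to "M",
    # exactly the path that mislabeled the record described above.
    return "M"

print(infer_recorded_sex("Mary"))  # F
print(infer_recorded_sex("Ke"))    # M, the flawed default
```

The bug is not exotic: it is a default value standing in for missing information, a design choice no one at the front desk will catch unless someone reads the print-out.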


Stuck Between a Rock and a Genomic Hard Place

Will Orlady, MJLST Staff Member

In Privatizing Biomedical Citizenship: Risk, Duty, and Potential in the Circle of Pharmaceutical Life, Professor Jonathan Khan wrote: “genomic research is at an impasse.” Though genomic research has advanced incrementally since the completion of the first draft of the human genome, Khan asserts, “few of the grandest promises of genomics have materialized.” This apparent lack of progress is a complex issue, and one may be left asking whether, within the current economic and regulatory scheme, genomics actually has promising answers to give. But Khan cites biomedical researchers who claim that what is needed to propel genomic research forward is simple: more bodies.

Indeed, it is a simple answer, but to which question, or questions? Khan’s article explores the “interconnections among five . . . federally sponsored biomedical initiatives of the past decade in order to illuminate critical aspects of the current drive to get bodies.” To be sure, the article provides the literature with a fine starting analysis of public biomedical programs, synthesizing much of the previous research on biomedical research participation, and it evaluates previously proposed methods for increasing genomic research participation. Khan’s article, however, left me with more questions than answers. If the public and private sectors cannot work together to produce results, who is left to ensure progress? Is progress currently feasible? Are we being too hasty and impatient in demanding results from an admittedly young scientific discipline? And, ultimately, if study participants are expected to contribute their own genetic material or bodies, what do they get in return?

Khan’s article attempts to address the final question: if we are to create a legal or social obligation to contribute to genomic research for the sake of the public, what benefit (or, at the least, what safety assurance) do contributors receive in return? Clearly, the issues associated with creating a system of duties that provides no corresponding rights are many. Underlying this discussion is the notion that, to ensure the timely progress of genomic research, mandated participation in such research might be necessary. Herein lies a problem: “[t]hese duties effectively privatize citizenship, recasting service to the political community as a function of service to [an] . . . enterprise of biomedical research. . . . ” What is more, Khan is keen to point out that, time and time again, promises of genomic advancement in the hands of collaborating private and public entities have failed to materialize.

If we are to go forward privatizing citizenship, creating duties for persons to use their bodies for the benefit of society, we must be careful to ensure (1) that individual rights in the outcomes of the research are secured, and (2) that society will in fact benefit from the collectively imposed obligations.

Although Khan’s article leaves many questions unanswered, I empathize with his wariness of creating a public duty to contribute to biomedical research. Such complex issues are not easily resolved. Torpid genomic research is troubling, but so is the notion of privatized citizenship that ascribes duties without granting corresponding rights. Though more bodies may be needed for the timely advancement of genomic research, policymakers and academics alike should be cautious about creating any program that compromises the integrity of personal privacy for the sake of public advancement without granting corresponding rights.


“Precision Medicine” or Privacy Pitfalls? Ethical Considerations Related to the Proposed Health Database

Thomas Hale-Kupiec, MJLST Staff Member

President Barack Obama proposed spending $215 million on a “precision medicine” initiative. The largest part of the money, $130 million, would go to the National Institutes of Health to create a population-scale study. The study would build a database of genetic, environmental, lifestyle, medical, and microbial data from both healthy and sick volunteers, with the aim of accelerating medical research and personalizing treatments to patients. Though some would call this a “bio-bank,” Francis Collins, director of the National Institutes of Health, said the project is greater than that, as it would combine data from what he called more than 200 large, ongoing American health studies that together involve at least two million people. “Fortunately, we don’t have to start from scratch,” he said. “The challenge of this initiative is to link those together. It’s more a distributed approach than centralized.” Further, the President immediately attempted to alleviate concerns related to privacy: “We’re going to make sure that protecting patient privacy is built into our efforts from Day 1. . . I’m proud we have so many patients-rights advocates with us here today. They’re not going to be on the sidelines. This is not going to be an afterthought. They’ll help us design this initiative from the ground up, making sure that we harness the new technologies and opportunities in a responsible way.”

Three major issues seem to be implicated in this proposed database study. First, both informed consent and incidental findings are problematic in this model. In drawing information from the existing American health studies, the government may bypass what participants initially consented to when they agreed to join those studies. Further, incidental findings and individual research results of potential health, reproductive, or personal importance to individual contributors are implicated in these studies; these aspects need to be considered in order to avoid liability going forward and to give participants realistic expectations of how their information may be used. Second, the collection and retention of this information raise concerns: questions about when, where, and for how long the information is held create a vast array of privacy issues, and the security of the information is implicated as well, since much of the data may be personal. Third, the deletion or removal of the information may become an issue if the program is ever discontinued or if participants are allowed to opt out. Options after closure include destroying the specimens, transferring them to another facility, or letting them sit unused in freezers. These options raise a multitude of questions about what to do with specimens and what level of consent should be required.

Overall, this database seems to hold immeasurable potential for the future of medicine. That said, legal and ethical considerations must be weighed throughout the policy’s development and implementation; with this immeasurable power comes great responsibility.


Revisiting the Idea of a Patent Small Claims Court

Comi Sharif, Managing Editor

In 2009, Robert P. Greenspoon explored the idea of adjusting the patent court system to improve efficiency in the adjudication of small-scale claims. His article, Is the United States Finally Ready for a Patent Small Claims Court?, appearing in Volume 10, Issue 2 of the Minnesota Journal of Law, Science & Technology, pointed out the deterrent effect that the high transaction costs of traditional patent litigation have on inventors trying to protect their intellectual property. Greenspoon argues that when patent holders are merely trying to recover small sums from infringers, the cost of the lengthy and expensive patent litigation system currently in effect often outweighs the remedies available through litigation. As a result, Greenspoon suggests the creation of a “Patent Small Claims Court” to resolve these issues. Seeing that it has been over five years since Greenspoon’s article, it makes sense to reexamine this topic and identify some of the recent developments related to the article.

In May of 2012, the USPTO and the United States Copyright Office co-sponsored a roundtable discussion to consider the possible introduction of small claims courts for patent and copyright claims. A few months later, the USPTO held another forum focused solely on patent small claims proceedings. A major emphasis of these discussions was the conformity of the new court with the U.S. Constitution (an issue addressed by Greenspoon in his article). In December of 2012, the USPTO published a questionnaire to seek feedback from the public on the idea of a patent small claims court. The survey focused on matters of subject matter jurisdiction, venue, case management, appellate review, and available remedies. See this link for the official request and list of questions from the USPTO submitted in the Federal Register. The deadline for submitting responses has since passed, but the results of the survey are still unclear.

Greenspoon’s article addresses a few of the unsuccessful past attempts to create a small claims patent court. In 2013, the House of Representatives passed a bill that authorized further study of a pilot program for patent small claims procedures in certain judicial districts. See H.R. 3309, 113th Cong. (2013). The Senate did not pass the bill, however, so no further progress occurred.

Overall, though there appears to be continued interest in creating a patent small claims system, it doesn’t seem likely that one will be created in the near future. The idea is far from dead, though, and perhaps some of Greenspoon’s proposals can still help influence a change. Stay tuned.


Postmortem Privacy: What Happens to Online Accounts After Death?

Steven Groschen, MJLST Staff Member

Facebook recently announced a new policy that grants users the option of appointing an executor of their account. This change means that an individual’s Facebook account can continue to exist after its original creator has passed away. Although Facebook status updates from “beyond the grave” are certainly a peculiar phenomenon, they fit nicely into the larger debate over how to handle one’s digital assets after death.

Rebecca G. Cummings, in her article The Case Against Access to Decedents’ Email: Password Protection as an Exercise of the Right to Destroy, discusses some of the arguments for and against providing access to a decedent’s online accounts. Those favoring access may assert one of two rationales: (1) access eases administrative burdens for personal representatives of estates; and (2) digital accounts are merely property to be passed on to one’s descendants. The response from those opposing access is that the intent of the deceased should be honored above other considerations. Further, they argue that if there is no clear intent from the deceased (which is not uncommon, because many Americans die without wills), the presumption should be that the decedent’s online accounts were intended to remain private.

Email and other online accounts (e.g., Facebook, Twitter, dating profiles) present novel problems for the property rights of the deceased. Historically, a diary or the occasional love letter was among the most intimate property that could be transferred to one’s descendants. The vast catalog of information available in an email account drastically changes what can be passed on. In contrast to a diary, an email account contains far more than the highlights of an individual’s day; emails provide a detailed record of an individual’s daily tasks and communications. Interestingly, this in-depth cataloging of daily activities has led some to argue that the information should be passed on as a way of creating a historical archive. There is certainly historical value in preserving an individual’s social media or email accounts; however, it must be balanced against the potential invasion of his or her privacy.

As of June 2013, seven states had passed laws that explicitly govern digital assets after death. The latest development in this area, however, is the Uniform Fiduciary Access to Digital Assets Act, created by the Uniform Law Commission. The act attempts to create consistency among the various states in how digital assets are handled after an individual’s death, and it is presently being considered for enactment in fourteen states. The act grants fiduciaries in certain instances the “same right to access those [digital] assets as the account holder, but only for the limited purpose of carrying out their fiduciary duties.” Whether this act will satisfy both sides of the debate remains to be seen.


Recent Developments Affecting the “Fracking” Industry

Neal Rasmussen, MJLST Staff Member

In “Notes from Underground: Hydraulic Fracturing in the Marcellus Shale,” from Volume 12, Issue 2 of the Minnesota Journal of Law, Science & Technology, Joseph Dammel discussed the then-current state of hydraulic fracturing (“fracking”) and offered various “proposals that protect public concerns and bolster private interests.” Since the Note’s publication in 2011, there have been major changes in the hydraulic fracturing industry as more states and cities question whether the reward is worth the risk.

Since 2011, required disclosure of the fluids used in fracking has become effective in fourteen additional states, bringing the total number of states requiring disclosure to twenty. While required disclosures have alleviated some concerns, many believe they are not enough and have pushed to ban fracking outright. Vermont was the first state to do so, in 2012. Although progressive, the ban was largely symbolic, as Vermont contains no major natural gas deposits. In late 2014, however, New York Governor Andrew Cuomo made a landmark decision by announcing that fracking would be banned within New York State. Many cities have begun to pass bans as well, including Denton, Texas, right in the heart of oil and natural gas country. Citing concerns about the potential health risks associated with the activity, Florida could be the next state to join the anti-fracking movement: in late 2014, two Florida senators introduced a bill that sought to ban all fracking activities, and a state representative introduced a similar bill at the beginning of 2015.

The bans have not been without controversy. The fracking industry has challenged many of the local bans, arguing that they are preempted by state law and exceed the cities’ authority. After Denton passed its local ban, the Texas Oil & Gas Association filed for an injunction, arguing that the city did not have authority to implement such a ban. It remains to be seen whether the challenge will succeed, but if the results in Colorado, where local fracking bans have been overturned on state preemption grounds, are any indication, the fracking industry should be confident. Until or unless there is a major federal decision on fracking regulation, the industry will be required to juggle the various state and local regulations, which are becoming less friendly as fracking becomes more controversial nationwide.


Privacy in the Workplace and Wearable Technology

Jessica Ford, MJLST Staff Member

Lisa M. Durham Taylor’s article, The Times They Are a-Changin’: Shifting Norms and Employee Privacy in the Technological Era, in Volume 15, Issue 2 of the Minnesota Journal of Law, Science & Technology, discusses employee workplace privacy rights in light of new technologies. Taylor spends much of the article on privacy concerns surrounding correspondence in the workplace. She notes that in certain cases employees may be able to expect their personal email account correspondence to remain private, as seen in the 2008 case Pure Power Boot Camp, Inc. v. Warrior Fitness Boot Camp, LLC. Generally, however, employers can legally monitor email messages and any websites an employee visits, including personal accounts.

Since Taylor’s article, new technologies have emerged, bringing with them new privacy implications for the workplace. Wearable technologies such as Google Glass, smart watches, and fitness bands exist in a legal void, particularly where privacy is concerned. Several workplaces have implemented Google Glass through Google’s Glass at Work program. While this could help productivity, especially in medical settings, it could also mean that an employer reviews every recorded moment, even those containing personal conversations or experiences.

Smart watches could also have a troubling future given the lack of legal boundaries. At the moment, it would be simple for a company to require employees to wear GPS-enabled smart watches and use them to track employees’ locations, see whether an employee is exceeding his break time, and communicate with employees instantaneously. Such uses could be frustrating, if not invasive. Messages and activities could also be tracked outside the office, essentially eliminating any semblance of personal privacy. Additionally, as Taylor notes in her article, there is case precedent upholding a “public employer’s search of text messages sent from and received on the employee’s employer-issued paging device.” That 2010 case, City of Ontario v. Quon, upheld the employer’s search even as to personal messages.

For the moment, it appears that employers are erring on the side of caution. It will take some time to see whether the legal framework Taylor discusses will be applied to wearable technologies and whether it will be more permissive or restrictive for employers.


Could Changes for NEPA Be on the Horizon?

Allison Kvien, MJLST Staff Member

The National Environmental Policy Act (NEPA) was one of the first broad, national environmental protection statutes ever written. NEPA’s aim is to ensure that agencies give proper consideration to the environment before taking any major federal action that significantly affects it. To that end, NEPA requires agencies to prepare Environmental Impact Statements (EISs) and Environmental Assessments (EAs) for such projects. Yet NEPA is often criticized as ineffective in the courts for environmental plaintiffs seeking review of federal agency actions: environmental petitioners who have brought NEPA issues before the Supreme Court have never won.

The Court has never reversed a lower court ruling on the ground that the lower court failed to apply NEPA with sufficient rigor. Indeed, the Court has not once granted review to consider the possibility that a lower court erred in that direction and then heard the case on the merits. The Court has instead reviewed cases only when NEPA plaintiffs won below, and then it has reversed, typically unanimously.

Because environmental plaintiffs have never won before the Supreme Court on a NEPA issue, many view the statute as a weak tool and have wanted to strengthen or overhaul NEPA.

According to a recent report from the Environmental Law Reporter, President Obama is now “leaning on NEPA” for the work he hopes to accomplish in improving the permitting process for infrastructure development, but it does not look like he is working to improve NEPA itself:

The president’s initiative has identified a number of permitting improvements, but it does not include a serious effort to force multiple agencies to align their permitting processes. A key to forcing multiple agencies to work together on project reviews and approvals is found in an unlikely place: NEPA. The statute is overdue for a makeover that will strengthen how it identifies and analyzes environmental impacts for federal decisionmakers. In doing so, it can provide the framework that will require multiple agencies to act as one when reviewing large projects.

Though Obama’s proposal may not address improvements to NEPA itself, could it help those who have long wished to give NEPA an overhaul? This is not the first time in the last couple of years that the President has talked about using NEPA. In March 2013, Bloomberg reported that Obama was “preparing to tell all federal agencies for the first time that they should consider the impact on global warming before approving major projects, from pipelines to highways.” With NEPA being key to some of President Obama’s initiatives, could there be more political capital to make some long-wanted changes to NEPA? There might be some hope for NEPA just yet.


Admission of Scientific Evidence in Criminal Cases Under the Daubert Standard

Sen “Alex” Wang, MJLST Staff Member

In Crawford v. Washington, the Supreme Court, in a unanimous decision, overruled its earlier decision in Ohio v. Roberts by rejecting the admission of out-of-court testimony due to its nature as “testimonial” evidence. It was not clear, however, whether the constitutional right of confrontation applied only to traditional witnesses (like the statement in Crawford) or also to scientific evidence and experts. The Court subsequently clarified this point in Melendez-Diaz v. Massachusetts and Bullcoming v. New Mexico, where it upheld the confrontation right of defendants to cross-examine the analysts who performed the scientific tests. Compared to traditional testimony from eyewitnesses, however, scientific evidence (e.g., blood alcohol measurements, field breathalyzer results, genetic testing) is a relatively new development in criminal law. The advancement of modern technologies creates a new question: whether this evidence is sufficiently reliable to avoid triggering the Confrontation Clause.

This question is discussed in a student note and comment titled The Admission of Scientific Evidence in a Post-Crawford World, in Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology. The author, Eric Nielson, points out that the ongoing dispute in the Court about requiring analysts to testify before admitting scientific findings misses the mark. Scientific evidence, especially the result of an analytical test, is an objective, not a subjective, determination. In the courtroom, the testimony of a scientific witness is based mainly on review of the content of the witness’s report, not on the witness’s memories. Thus, according to the author, though Justice Scalia’s bold statement in Crawford that “reliability is an amorphous, if not entirely subjective, concept[,]” may be right in the context of traditional witnesses, it is clearly wrong in the realm of science, where reliability is a measurable quantity. In particular, the author suggests that scientific evidence should be admitted under the standard articulated by the Court in Daubert v. Merrell Dow Pharmaceuticals.

As the author emphasizes, a well-drafted technical report should answer all of the questions that would be asked of the analyst. Given that there is currently no national or widely accepted set of standards for forensic science written reports or testimony, the author proposes the following key components for a scientific report conforming to the Daubert standard:

1) a sample identifier, including any identifier(s) assigned to the sample during analysis;
2) documentation of sample receipt and chain of custody;
3) the analyst’s name;
4) the analyst’s credentials;
5) evidence of the analyst’s certification or qualification to perform the specific test;
6) the laboratory’s certification;
7) the testing method, either referencing an established standard (e.g., ASTM E2224-10, Standard Guide for Forensic Analysis of Fibers by Infrared Spectroscopy) or including a copy of the method if it is not publicly available;
8) evidence of the effectiveness and reliability of the method, from peer-reviewed journals, method certification, or internal validation testing;
9) the results of testing, including the results of all standards or controls run as part of the testing;
10) copies of all results, figures, graphs, etc.;
11) a copy of the calibration log or certificate for any equipment used;
12) any observations, deviations, and variances, or an affirmative statement that none were observed;
13) the analyst’s statement that all this information is true, correct, and complete to the best of his or her knowledge;
14) the analyst’s statement that the information is consistent with various hearsay exceptions;
15) evidence of second-party review, generally by a supervisor or qualified peer;
16) posting of a copy to a publicly maintained database; and
17) notification of the authorizing entity via email of the completion of the work and the location of the posting.
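Seen as a data model, the proposal is essentially a fixed schema for forensic reports. The following Python sketch is a hypothetical mapping of the seventeen components into fields; the class and field names are illustrative, not drawn from the note itself:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DaubertReport:
    """Hypothetical schema for the seventeen components listed above."""
    # Identification and provenance (items 1-2)
    sample_id: str
    chain_of_custody: List[str]
    # Analyst and laboratory qualifications (items 3-6)
    analyst_name: str
    analyst_credentials: str
    analyst_test_certification: str
    lab_certification: str
    # Method and its validation (items 7-8)
    testing_method: str        # established standard, or full method text
    method_validation: str     # peer review, certification, or internal testing
    # Results and instrumentation (items 9-11)
    results: List[str]         # including all standards and controls run
    attachments: List[str]     # figures, graphs, raw output
    calibration_records: List[str]
    # Deviations and attestations (items 12-14); None means "none observed"
    deviations: Optional[str] = None
    attests_true_and_complete: bool = False
    attests_hearsay_exceptions: bool = False
    # Review and publication (items 15-17)
    reviewer: Optional[str] = None
    public_database_url: Optional[str] = None
    authorizing_entity_notified: bool = False
```

Nothing here is doctrinal; the point is simply that every element the author lists is a concrete, checkable field, which is what distinguishes a report that can stand in for live testimony from one that cannot.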

Per the author, because scientific evidence is especially probative, the current refusal to demand evidence of reliability, method validation, and scientific consensus has allowed shoddy work and practices to impersonate dependable science in the courts. This is an injustice to the innocent and the guilty alike.


Mechanics or Manipulation: Regulation of High Frequency Trading Since the “Flash Crash” and a Proposal for a Preventative Approach

Dan Keith, MJLST Staff Member

In May of 2010, the Dow Jones Industrial Average plummeted nearly 1,000 points and recovered within half an hour. The disturbing part? No one knew why.

An investigation by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) determined that, in complicated terms, the Flash Crash involved “a rapid automated sale of 75,000 E-mini S&P 500 June 2010 stock index futures contracts (worth about $4.1 billion) over an extremely short time period created a large order imbalance that overwhelmed the small risk-bearing capacity of financial intermediaries–that is, the high-frequency traders and market makers.” After about ten minutes of purchasing the E-mini, High Frequency Traders (HFTs) began rapidly selling the same instrument to deplete their own overflowing inventories. This unloading came at a time when liquidity was already low, so the rapid and aggressive selling accelerated the downward spiral. As a result of this volatility and the overflowing E-mini inventory, HFTs were passing contracts back and forth in a game of financial “hot potato.”

In simpler terms, on that day in May of 2010, a number of HFT algorithms “glitched,” generating a feedback loop that caused stock prices to spiral and skyrocket.

This event put High Frequency Trading on the map for both the public and regulators. The SEC and the CFTC have responded with significant regulations meant to curb the mechanistic risks that left the stock market vulnerable in the spring of 2010. Those regulations include new reporting systems like the Consolidated Audit Trail (CAT), which is supposed to allow regulators to track HFT activity through the data it produces as it comes in. Furthermore, Regulation Systems Compliance and Integrity (Reg SCI), a regulation still being negotiated into its final form, would require that HFTs and other eligible financial groups “carefully design, develop, test, maintain, and surveil systems that are integral to their operations. Such market participants would be required to ensure their core technology meets certain standards, conduct business continuity testing, and provide certain notifications in the event of systems disruptions and other events.”

While these regulations are appropriate for the mechanistic failures of HFT activity, regulators have largely overlooked an aspect of High Frequency Trading that deserves more attention: nefarious, manipulative HFT practices. These come in the form of either “human decisions” or “nefarious” mechanisms built into the algorithms that animate High Frequency Trading. “Spoofing,” “smoking,” “stuffing”: the names differ, with small variations, but each of these activities involves placing large orders for stock and quickly cancelling or withdrawing them in order to create false market data. A rough sketch of what this pattern might look like in order-flow data appears below.
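To make the pattern concrete, here is a hedged sketch of the kind of screen a preventative regime might run over order-flow data. The event format, thresholds, and names are illustrative assumptions, not any actual surveillance system:

```python
from collections import defaultdict

LARGE_ORDER = 10_000     # shares; illustrative threshold
CANCEL_WINDOW = 0.5      # seconds between placement and cancellation
SUSPICIOUS_RATIO = 0.9   # share of large orders cancelled almost immediately

def flag_possible_spoofers(events):
    """events: dicts like {"trader": "A", "order_id": 1,
    "type": "new" or "cancel", "size": 12_000, "time": 0.01}."""
    open_orders = {}                      # order_id -> (trader, time placed)
    counts = defaultdict(lambda: [0, 0])  # trader -> [large orders, quick cancels]
    for ev in events:
        if ev["type"] == "new" and ev["size"] >= LARGE_ORDER:
            open_orders[ev["order_id"]] = (ev["trader"], ev["time"])
            counts[ev["trader"]][0] += 1
        elif ev["type"] == "cancel" and ev["order_id"] in open_orders:
            trader, placed_at = open_orders.pop(ev["order_id"])
            if ev["time"] - placed_at <= CANCEL_WINDOW:
                counts[trader][1] += 1
    # Flag traders whose large orders overwhelmingly vanish before trading.
    return [trader for trader, (large, quick) in counts.items()
            if large >= 10 and quick / large >= SUSPICIOUS_RATIO]
```

Any real screen would be far more sophisticated, but the core signal, large orders that vanish before they can trade, is exactly the false market data the rules target.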

Regulators have responded with “deterrent”-style legislation that outlaws this type of activity. Regulators and lawmakers have yet, however, to introduce regulations that would truly “prevent,” as opposed to simply “deter,” these types of activities. Plans for truly preventative regulations can be modeled on current practices and existing regulations. A regulation of this kind requires only the right framework to make it truly effective as a preventative measure, stopping “Flash Crash”-type events before they can occur.