Articles by mjlst

The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to increase in scope, the information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in the article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is reliability. To be admitted as evidence, the source of information must be authenticated so that a fact-finder may rely on the source, and ultimately its content, as trustworthy and accurate. However, social media sites are particularly susceptible to forgery, hacking, and alteration. Without a confession, it is often difficult to determine who actually authored the posted content.

Courts grapple with this issue. Some allow social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4); others treat authentication as a relatively low bar, reasoning that as long as the witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


A Slow Government’s Response to High-Frequency Trading

Nolan Hudalla, MJLST Staffer

High-frequency trading (HFT) is the use of enhanced technological speed to gain an edge in trading financial instruments. This edge over other investors is often only one-hundredth of the blink of an eye, yet it can provide a company with years of unwavering success. Although HFT became significant within the past decade because of its economic advantages, the recent discussion of HFT has become increasingly negative. A major reason for this shift in opinion is increased awareness of unethical trading practices after the 2008 financial crisis.

MJLST published an article last year on the ethics of HFT. In that article, The Law and Ethics of High-Frequency Trading, Steven McNamara advanced various reasons why certain HFT practices violate both business ethics and federal agency regulations. But where do Congress and the SEC stand, and what have they done to correct such unethical practices in HFT?

It appears that the federal government is taking a middling approach to fighting unethical HFT practices. In particular, the SEC has not taken a hard stance on many HFT issues, and Congress has followed suit by not passing any bills in 2016 to fight HFT. However, it is also evident that the SEC and Congress are troubled by ongoing bad practices and are aware of the risk of future violations. Specifically, Congress has introduced several bills “imposing a tax on a broad array of financial transactions that could impact HFT . . . [and] also held hearings in the 114th Congress touching on HFT issues as part of its oversight of the SEC and CFTC.” In addition, the SEC has increased enforcement on serious HFT ethics violations. The agency also approved a new public stock exchange in June with a “speed bump” capable of deterring some HFT activity.


Has GoPro’s Voluntary “Karma” Refund Program Revealed a Gap in Regulatory Jurisdiction over Commercial and Private Drones?

Joey Novak, MJLST Staffer

Drones in 2016 are involved in everything from assisting law enforcement to recording weddings and sporting events to the potential for package delivery, and as such they have been rapidly expanding into recreational and commercial settings. Drones also present one of the widest liability palettes imaginable, as Fourth Amendment, privacy, property, and products liability issues all combine to form the Frankenstein’s monster of liability; that is, if that monster were also subject to 152 pages of operational FAA regulation because he could fly.

With such a wide breadth of hot-topic liability issues, it’s not surprising that what should be the most common issue for commercial use has been somewhat overlooked: products liability. On November 8th, GoPro announced the “recall” of 2,500 Karma drones after the $800 drone had been on the market for only two weeks. Apparently, an off-center camera placement led to increased vibration, causing connectivity issues and, in turn, drones unexpectedly falling out of the sky. Although no injuries have been reported, one does not have to make a large leap to imagine a falling drone leading to injury and subsequent liability issues.

The interesting thing about this “recall” is that it revealed a regulatory gap between the FAA (Federal Aviation Administration) and the CPSC (Consumer Product Safety Commission) for drone product liability. With the FAA taking over regulation of drones through its Part 107 regulations released in June of this year, a CPSC spokesperson has stated simply that “[w]e do not have jurisdiction over drones.” But while the FAA does regulate the manufacturing of larger aircraft through a certificate process, its oversight of drones to this point has been restricted to operational issues, not the classic manufacturing or design defects that lie at the heart of products liability. Both agencies ended up “recommending” that GoPro proceed with its refund program, and GoPro has stated that it is working “in close coordination” with both agencies. However, GoPro was not actually required to report to either agency or to participate in any government-mandated recall program.

Now, with drones falling out of the sky, GoPro was strongly self-incentivized to get its products off the market to avoid what would be fairly cut-and-dried liability in the event that any injuries actually did occur. But what if a potential issue with drones were not so obviously open to liability? Commercial drone companies could unilaterally decide to keep their products on the market if they determine that whatever injury is occurring may, for example, be more the result of user error than of a classic manufacturing or design defect. Companies would then take their chances with potential suits, and the absence of an agency-mandated reporting and recall program could actually assist companies in their defense, as companies would only need to fulfill their post-sale duty to warn about the product’s dangers rather than recall the product entirely.

Restatement (Third) of Torts: Products Liability § 11 imposes liability for failure to recall pursuant to a governmental directive, but in the absence of such a government-mandated requirement, a company can be liable only if it decides to voluntarily recall the product and is negligent in doing so. This governmental requirement stems from the thought that, as the Michigan Supreme Court puts it, “the duty to repair or recall is more properly a consideration for administrative agencies and the Legislature.” In fact, as comment c to the Restatement states, “voluntary recalls are typically undertaken in the anticipation that a government agency will require one anyway.”

If no government agency is requiring recall or repair for drones, companies are presumably left to make the counter-policy determination of whether the cost of potential liability from public injury outweighs the costs associated with repair or recall. While such a determination may involve more than this cost-benefit vacuum (shareholder relations, consumer goodwill, future sales and outlook, and so on), government-mandated recall programs are put in place precisely to keep companies from having to weigh costs against public safety. GoPro certainly did the “right thing” here by swiftly engaging in a voluntary refund program (maybe they just wanted some good “Karm- ah forget it), but look for Congress to clarify agency jurisdiction over drone recalls in the near future to protect recreational and commercial drone producers against themselves.


How Yuge Will Trump’s Influence Be on United States Science?

Daniel Baum, MJLST Staffer

Science was only a minute fragment of the candidates’ campaigns, but many researchers have expressed fears about Trump. “Trump will be the first anti-science president we have ever had,” Michael Lubell, director of public affairs for the American Physical Society, told Nature. “The consequences are going to be very, very severe.” How severe, and which kinds of science will Trump influence?

One science topic that was explicitly discussed in the campaigns was climate change. Trump has long denied climate change, and as Trump turned to the Republican Party’s conservative base, he said that his administration will focus on “real environmental challenges, not phony ones.” However, Trump has expressed support for economically beneficial climate change research: he told Science Debate that “[p]erhaps we should be focused on developing energy sources and power production that alleviates the need for dependence on fossil fuels” and specified that those energy sources worth developing include wind, solar, nuclear, and bio-fuels.

Trump has also taken the Republican Party’s businessman’s approach to space and public health research. For space research, Trump thinks that we should seek global partners and would like to expand the role of the commercial space industry in the US space program. Discussing public health research, Trump told conservative radio host Michael Savage, “I hear so much about the NIH, and it’s terrible.” Trump told Science Debate that instead of giving the NIH all the funding it needs, “efforts to support research and public health initiatives will have to be balanced with other scarce resources” by Congress, where the Republicans now control both houses.

In order to do good science, the United States needs the best researchers. However, Trump’s strong anti-immigration stance may dissuade foreign scientists from coming to or staying in the United States to do research; why should a highly skilled researcher come to or stay in the U.S. if he or she will have to do research in an environment hostile to immigrants? With fewer noncitizen scientists, we’ll need to train our own scientists with great science education. Unfortunately, Trump has expressed essentially anti-education policies. He argues that some colleges and universities should bear the burden of students’ loan debt and that the federal government should stop making money off student loans. Trump also wants to pull federal funding from the Department of Education, or demolish it altogether, leaving management of public education to the state and local level while removing federal funding for low-income public schools.

Overall, Trump will change science in the United States bigly. If he sticks to the points he made on the campaign trail, the United States will have fewer scientists, and they will mostly only receive federal funding to do research on things that the Republican Party thinks will make Americans money. That could include the development of new environmentally friendly energy sources, but most likely not space or public health research. But there is still hope: this change will only be so yuge if Trump sticks exactly to what he said while campaigning. Already, less than a week after being elected, Trump has backpedaled on his rabid anti-Obamacare stance, and maybe he’ll realize that the best way to make America great again is to make Americans and American science great again.


The Future Is Solar: Investing in Solar Energy Using Sale Leasebacks

Alan Morales, MJLST Staffer

Solar energy has come a long way in the last few decades as the cost of producing photovoltaic (PV) cells, the main technology used in converting sunlight into electricity, has significantly decreased. Furthermore, there is a federal tax credit program available, which allows investors in solar energy to claim 30 percent of their solar energy installation cost as a credit on their taxes. This has led residential, commercial and industrial property owners to slowly increase their solar usage.

However, solar developers in many cases will not have enough tax liability to make immediate use of the tax benefits. An essential financing mechanism for solar developers is a “tax equity” transaction, in which tax benefits are sold to raise capital to build the solar project. This demand for cash has caught the attention of private equity firms, pension funds, and foreign investors.

To start, these cash investors must invest through a “blocker” corporation, a US entity treated as a corporation for tax purposes. Cash investors should understand how the tax equity works, since they will be investing alongside it; it will also affect what the cash investor can get out of the deal. A cash investor might then use a sale-leaseback to finance the project. Sale-leasebacks are common in the commercial and industrial rooftop and utility-scale solar markets. In a sale-leaseback, the developer sells a project to a tax equity investor for its fair market value, and the investor then leases it back to the developer. In this arrangement, the investor keeps all of the tax benefits and receives cash in the form of rent from the developer, while the developer has taxable gain on the sale to the extent the value of the property exceeds what it cost to build. Although a lessor position is not ideal for some cash investors, it can prove beneficial if they can purchase the project, lease it back to the developer, and sell a portion of the lease to a tax equity investor.
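To make those mechanics concrete, here is a minimal sketch with purely hypothetical numbers; the build cost, sale price, rent, and lease term are invented for illustration, and the 30 percent figure is simply the federal credit mentioned above applied under simplified assumptions (no depreciation, discounting, or transaction costs).

```python
# Illustrative sketch only: hypothetical numbers and simplified assumptions,
# not tax or investment advice. It mirrors the sale-leaseback mechanics
# described above: the investor (lessor) buys the project at fair market
# value, keeps the 30% federal credit, and collects rent; the developer
# recognizes gain to the extent the sale price exceeds the build cost.

def sale_leaseback_summary(build_cost, fair_market_value, annual_rent, lease_years):
    investor_tax_credit = 0.30 * fair_market_value        # credit assumed taken on the purchase price
    developer_taxable_gain = fair_market_value - build_cost
    investor_rent_received = annual_rent * lease_years     # ignores depreciation and discounting
    return {
        "investor_tax_credit": investor_tax_credit,
        "developer_taxable_gain": developer_taxable_gain,
        "investor_rent_received": investor_rent_received,
    }

# Hypothetical example: an $8M project sold at a $10M fair market value,
# leased back for 20 years at $700,000 per year.
print(sale_leaseback_summary(8_000_000, 10_000_000, 700_000, 20))
# -> investor credit of $3M, developer gain of $2M, and $14M of rent over the lease
```

On these invented numbers, the lessor-investor would hold a $3 million credit plus the rent stream, while the developer would recognize a $2 million taxable gain, which is the basic trade described above.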

The main benefit to a cash equity investor is flexibility. A cash investor is in a position to sell as much of its lease position as it wants and to retain as much cash flow as it wants. Sale-leasebacks are enticing for developers because they offer financing for the project while freeing up cash for other business needs. The tax equity investor benefits least and would have to become a member of the lessor before the asset is placed in service, which means taking on some degree of construction risk.


6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any judicial court in the United States, an individual or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. When data breach cases have reached the federal courts of appeals, there has been some disagreement as to whether the risk of future harm from a breach, and the costs spent to mitigate that harm, qualify as “injuries in fact” under Article III’s first prong.

Last spring, I wrote a note concerning Article III standing in data breach litigation in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015), was superior to that of its sister circuits and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based upon the risk of future harm and the time and money spent mitigating that risk. In holding that these plaintiffs had satisfied Article III’s injury-in-fact requirement, the court made a critical inference that when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference stands in stark contrast to the reasoning of the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part due to the Reilly court’s characterization of the risk of future harm. The Reilly court found the risk of misuse of information to be highly attenuated, reasoning that whether the plaintiffs experience an injury depends on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts; any time an individual’s intent is at issue in a case, this is a certainty. When faced with these situations, lawyers have long used inferences to differentiate between more likely and less likely accounts of the missing facts. In a data breach case, both parties will almost always have little to no knowledge of the intent, capabilities, or plans of the hacker. However, there is room for reasonable inferences to be made about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume the hacker is sophisticated enough to use that data. Further, because executing a data breach is illegal and carries risk, it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, the better rule seemed to me to be to assume that the hacker is able to use the data and plans to do so in the future. And if there are facts tending to show that this inference is wrong, at the pleading stage the defendant corporation is far more likely to possess that information than the plaintiff(s).

Since Remijas, two data breach cases have reached the federal courts of appeals on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed the precedent set forth in Remijas and found that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its circuit, leaving it free to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In that case, the plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs alleged “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the inference made by the Remijas court persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how the circuits that have not yet faced this issue rule on it and whether, if the 3rd Circuit keeps its current reasoning, the question eventually makes its way to the Supreme Court of the United States.


Navigating the Future of Self-Driving Car Insurance Coverage

Nathan Vanderlaan, MJLST Staffer

Autonomous vehicle technology is not new to the automotive industry. For the most part, however, these technologies have been incorporated as back-up measures for when human error leads to poor driving. For instance, car manufacturers have offered packages that incorporate features such as blind-spot monitoring, forward-collision warnings with automatic braking, and lane-departure warnings and prevention. But the recent push by companies like Google, Uber, Tesla, Ford, and Volvo is making the possibility of fully autonomous vehicles a near-future reality.

Autonomous vehicles will arguably be the next great technology responsible for saving countless lives. According to alertdriving.com, over 90 percent of accidents are the result of human error. By taking human error out of the driving equation, The Atlantic estimates that the full implementation of automated cars could save up to 300,000 lives a decade in the United States alone. In a show of federal support, U.S. Transportation Secretary Anthony Foxx released an update in January 2016 to the National Highway Traffic Safety Administration’s (NHTSA) stance on autonomous vehicles, promulgating a set of 15 standards to be followed by car manufacturers in developing such technologies. Further, in March 2016, the NHTSA promised $3.9 billion in funding over 10 years to “support the development and adoption of safe vehicle automation.” As the world makes the push for fully autonomous vehicles, the insurance industry will have to respond to the changing nature of vehicular transportation.

One of the companies leading the innovative charge is Tesla. New Tesla models may now come equipped with an “autopilot” feature. This feature incorporates multiple external sensors that relay real-time data to a computer that navigates the vehicle in most highway situations. It allows the car to slow down when it encounters obstacles, as well as change lanes when necessary. Elon Musk, Tesla’s CEO, estimates that the autopilot feature can reduce Tesla driver accidents by as much as 50 percent. Still, the system is not without issue. This past June, a user of the autopilot system was killed when his car collided with a tractor trailer that the car’s sensors failed to detect. Tesla quickly distributed a software update that Musk claims would have been able to detect the trailer. The accident has prompted discussion of how insurance claims and coverage will adapt to accidents that the vehicle’s owner no longer causes.

Auto insurance is a state-regulated industry. Currently, there are two significant insurance models: no-fault systems and the tort system. While each state’s system has many differences, each model has the same overarching structure. No-fault insurance models require the insurer to pay parties injured in an accident regardless of fault. Under the tort system, the insurer of the party who is responsible for the accident foots the bill. Under both systems, however, the majority of insurance premium costs are derived from personal liability coverage. A significant portion of insurance coverage structure is premised on the notion that drivers cause accidents. But when the driver is taken out of the equation, the basic concept behind automotive insurance changes.

What seems to be the most logical response to the implementation of fully autonomous vehicles is to hold the manufacturer liable. Whenever a car engaged in a self-driving feature crashes, it can be presumed that the crash was caused by a manufacturing defect. The injured party would then bring a products-liability action to recover for damages suffered in the accident. Yet this system ignores some important realities. One such reality is that manufacturers will likely pass the new cost on to the consumer in the purchase price of the car. These costs could push a car outside the average consumer’s price range and could hinder the widespread implementation of a safer automotive alternative to human-driven cars. Even if manufacturers don’t rely on consumers to cover the bill, the new system will likely require new forms of regulation to protect car manufacturers from going under due to overwhelming judgments in the courts.

Perhaps a more effective method of insurance coverage has been proposed by RAND, a research organization that specializes in evaluating new technologies and suggesting how best to use them. RAND has suggested that a universal no-fault system be implemented for autonomous vehicle owners. Under such a system, autonomous car drivers would still pay premiums, but those premiums would be significantly lower as accident rates decrease. For this system to work, regulation would likely have to come from the federal level to ensure the policy is followed uniformly across the United States. One company that has begun a system mirroring this philosophy is Adrian Flux in Britain. This insurer offers a plan for drivers of semi-autonomous vehicles that is lower in price than traditional insurance plans. Adrian Flux has also announced that it will update its policies as both the liability debate and driverless technology evolve.

No matter the route chosen by regulators or insurance companies, the issue of autonomous car insurance likely won’t arise until 2020, when Volvo plans to place commercial, fully autonomous vehicles on the market. Even then, it could be decades before a majority of vehicles on the street have such capabilities. This time will give regulators, insurers, and manufacturers alike adequate time to develop a system that will best propel our nation toward a safer, autonomous automotive society.


Drinking the Kool-Aid? Why We Might Want to Talk About Our Road Salt

Nick Redmond, MJLST Staffer

Winter is coming. Or at least, according to the 2017 Farmer’s Almanac, “winter is back” after an exceptionally mild 2015–2016 season, and with it comes all of the shoveling, the snow-blowing, and the white walkers (er, de-icing) of slippery roads that we missed last year. So what do the most overused Game of Thrones quote and everyone’s least favorite season have to do with Kool-Aid (actually, Flavor-Aid)? Just like the origins of the phrase “drinking the Kool-Aid,” this post has to do with cyanide. More specifically, the ferrocyanide compounds that we use to coat our road salt and that are potentially contaminating our groundwater.

De-icing chemicals are commonly regarded as the most efficient and effective means of keeping our roads safe and free from ice in the winter. De-icing compounds come in many forms, from solids to slurries to sticky beet juice- or cheese brine-based liquids. The most common de-icing chemical is salt, with cities like Minneapolis spending millions of dollars to purchase upwards of 15,000 tons of treated and untreated salt to spread on their roads. To keep the solid salt from clumping or “caking” and becoming unusable as it sits around, it is usually treated with chemicals to ensure that it can be spread evenly on roads. Ferrocyanide (a/k/a hexacyanoferrate(II)) and the compounds sodium ferrocyanide and potassium ferrocyanide are yellow chemicals commonly used as anti-caking additives for road salt in Minnesota and other parts of the country, and they can be found in varying concentrations depending on the product, from 0.0003 ppm to 0.33 ppm. To put those numbers in perspective, the CDC warns that cyanide starts to produce harmful effects on humans at 0.05 mg/dL, or 0.5 ppm.
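For readers checking that last conversion, the arithmetic (assuming a dilute aqueous solution, where 1 mg/L is approximately 1 ppm) works out as:

\[
0.05\ \frac{\text{mg}}{\text{dL}} \times 10\ \frac{\text{dL}}{\text{L}} = 0.5\ \frac{\text{mg}}{\text{L}} \approx 0.5\ \text{ppm}
\]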

But why are chemicals on our road salt troubling? Road salt keeps ice from bonding with the pavement by lowering the freezing point of snow as it falls on the ground. As the salt gets wet it dissolves; it and the chemicals that may be attached to it have to go somewhere, which may be our surface and ground waters, or the air if the liquids evaporate. The introduction of these chemicals into groundwater is of particular concern for the 75% of Minnesotans, and people like them, who rely on groundwater sources for drinking water. The potential for harm arises when ferrocyanide compounds are exposed to light and rapidly decompose, yielding free cyanide (CN− and HCN). Further, as waters contaminated with cyanide are chlorinated and introduced to acids, they may produce cyanogen chloride, a highly toxic gas that was once considered for use in chemical warfare. Taking into account the enormous amount of salt used and stored each year, even small concentrations may add up over time. And although the EPA has placed cyanide on the Clean Water Act’s list of toxic substances, the fact that road salt is a non-point source means that it is entirely up to states and municipalities to decide how they want to regulate it.

The good news is that ferrocyanides are among the least toxic cyanide salts and tend not to release toxic free cyanide. What’s more, the concentrations of ferrocyanide on road salt are generally quite low, are spread out over large areas, and are further diluted by precipitation, evaporation, and existing ground and surface water. In order to really affect drinking water, the ferrocyanide has to (1) not evaporate into the air, (2) make its way through soil and into aquifers, and (3) accumulate in concentrations large enough to actually harm humans, something that can be difficult for a large molecule. Despite all of this, however, the fact that Minneapolis alone is dumping more than 15,000 tons of road salt each year, some of it laced with ferrocyanide, should give us pause. That’s the same weight as 15,000 polar bears being released in the city streets every year! Most importantly, these compounds seep into our garden soil, stick to our car tires and our boots, and soak the fur of our pets and wild animals. While cyanide on road salt certainly isn’t a significant public health risk right now, taking part in local conversations to explore and encourage alternatives (and there are a number of alternatives) to prevent future harm might be something to consider.

At the very least think twice about eating snow off the ground (if you weren’t already). Especially the yellow stuff.


Digital Health and Legal Aid: The Lawyer Will Skype You Now

Angela Fralish, MJLST Invited Blogger

According to Dr. Shirley Musich’s research article, Homebound Older Adults: Prevalence, Characteristics, Health Care Utilization and Quality of Care, homebound patients are among the top 5% of medical service users, with persistently high expenses. As it stands, about 3.6 million homebound Americans are in need of continuous medical care, but with the cost of healthcare rising, the number of elderly people retiring, hospitals closing in increasing numbers, and physician shortages anticipated, caring for the homebound is becoming expensive and impractical. In an article titled Care of the Chronically Ill at Home: An Unresolved Dilemma in Health Policy for the United States, author Karen Buhler-Wilkerson notes that even after two centuries of various experiments to deliver and finance home health care, there are still too many unresolved issues.

One potential solution lies at the crossroads of technology, medicine, and law. Telemedicine is a well-known medical technology providing cost-effective medical care for the homebound. Becker’s reports that telemedicine visits are often more affordable and that access is a very important component, both in enabling patients to communicate through a smartphone and in allowing clinicians to reach patients at a distance, particularly those for whom weekly travel to a hospital for necessary follow-ups or check-ins would be costly and is not feasible. In short, telemedicine is a form of affordable technology that reaches homebound patients.

Legal aid organizations are also beginning to integrate virtual services for the homebound. For example, at Illinois Legal Aid Online, clients are able to have a live consultation with a legal professional, and in Maryland, a virtual courthouse is used for alternative dispute resolution proceedings. Some states, such as Alaska and New York, have advocated for virtual consults and hearings as part of a best-practices model. On September 22nd of this year, the ABA launched a free virtual legal advice clinic to operate as an online version of a walk-in clinic. However, despite these responsive measures, virtual technology for legal aid remains expensive and burdensome.

But what about the cancer patient who can’t get out of bed to come in for a legal aid appointment, but needs help with a disability claim to pay their medical bills? Could diversifying telehealth user interfaces help cure the accessibility gap for both medicine and law?

Some organizations have already begun collaborations to address these issues. Medical-legal partnerships provide comprehensive care through cost-effective pooling of business funds and federal and corporate grant money. These partnerships address the sociolegal determinants impacting a patient’s health. One classic example is the homebound patient with aggravated asthma living in a house with mold spores. A lawyer works to get the housing up to code, which reduces the asthma and, consequently, future medical costs. Lawyers resolve the economic factors perpetuating a health condition while physicians treat it biologically. These partnerships are being implemented nationwide because of their proven results in decreasing the cost of care. With telehealth, the homebound asthmatic patient could log on to a computer, or work through an app on a phone, to show the attorney the living conditions in high resolution, in addition to receiving medical treatment.

The government seems to be favorable to these resolutions. The Health Resources and Services Administration allocated $18 million to health center collaborations seeking to improve quality care through health information technology. Further, the FDA has created the Digital Health program to encourage and foster collaborations in technologies that promote public health. Last year alone, Congress awarded $4 million to the Legal Services Corporation, which then disbursed that money among 15 legal aid organizations, many of which “will use technology to connect low-income populations to resources and services.” Telehealth innovation is a cornerstone for medical and legal professionals committed to improvements in low-cost, quality patient care, especially for the homebound.

Medical facilities could even extend this same technology profitably by offering patients an in-house “attorney consult” service to improve quality of care. Much like the invention of the convenient cordless phone, a telehealth phone could be used in-house or in outpatient settings to give a health organization a leading market edge in addition to decreasing costs. The full range of ways telehealth can be used to deliver legal services that improve healthcare has yet to be developed.

So if there is a multidisciplinary call for digital aid, why aren’t we seeing more of it on a daily basis? For one, the regulatory landscape may cause confusion. The FDA governs medical devices, the FTC regulates PHI data breaches, and the FCC governs devices using broadcast services or the electromagnetic spectrum. Telehealth touches on all of these, resulting in jurisdictional overlap among regulatory agencies. Other reasons may involve resistance to new technology and ever-evolving legislation and policies. In Teladoc, Inc. v. Texas Medical Board, a standard-of-care issue was raised when the medical board took action against physicians who prescribed medicine without first seeing the patient in person. One physician in the case stated that without telehealth, his homebound patient would receive no treatment. Transitioning from traditional in-person consultations to virtual assistance can greatly improve the health of patients, but it has brought a host of notable concerns.

Allegedly, telehealth was first used by Alexander Graham Bell in 1876 when he made a phone call to his doctor. Over 140 years later, the technology is used by NASA for outer-space health consults. While the technology is still relatively new, especially for collaborative patient treatment by doctors and lawyers, used wisely it could spark an interdisciplinary renaissance in using technology to improve healthcare systems and patients’ lives.

From all perspectives, virtual aid is a well-funded future component of both the medical and legal fields. It can be used in the legal sense to help people in need, in the business sense as an ancillary convenience service generating profits, or in the medical sense to provide care for the homebound. The trick will be to find engineers who can secure multiuse interfaces while meeting federal regulations and public demand. Only time will tell if such a tool can be efficiently developed.


Haiti, Hurricanes and Holes in Disaster Law

Amy Johns, MJLST Staffer

The state of national disaster relief depends greatly on the country and on that country’s funds. Ryan S. Keller’s article, “Keeping Disaster Human: Empathy, Systematization, and the Law,” argues that proposed legal changes to natural disaster laws (both national and international) could have negative consequences for the donative funding of disaster relief. In essence, he describes a potential trade-off: do we want to risk losing the money that makes disaster relief possible for the sake of more effectively designating and defining disasters? These calculations are particularly critical for countries that rely heavily on foreign aid to recover after national disasters.

In light of recent tragedies, I would point to a related difficulty: what happens when the money is provided, but because of a lack of accountability or governing laws, the funds never actually make it to their intended purposes? Drumming up financial support is all well and good, but what if the impact is never made because there are no legal and institutional supports in place?

Keller brings up a common reason to improve disaster relief law: “efforts to better systematize disaster may also better coordinate communication procedures and guidelines.” There is a fundamental difficulty in disaster work when organizations don’t know exactly what they are supposed to be doing. A prime example of this lack of communication and guidelines has been seen in Haiti, where disaster relief efforts are largely dependent on foreign aid. The fallout from Hurricane Matthew has resurrected critiques of the 2010 earthquake response; most prominent was the Red Cross’s claim that it would build 130,000 homes, when in fact it built only six. Though the Red Cross has since disputed these claims, the fiasco pointed to an extreme example of NGOs’ lack of accountability to donors. Even when such efforts go as planned and are successful, the concern among many is that they build short-term solutions without helping to restructure institutions that will last beyond the presence of these organizations.

Could legal regulations fix problems of accountability in disaster relief? If so, the need for those considerations is urgent: climate change means that similar disasters are likely to occur with greater frequency, so the need for effective long-term solutions will only become more pressing.