New Technology

Tax Software: Where Automation Falls Short

Kirk Johnson, MJLST Staffer

With the rise of automated legal technologies, we sometimes assume that any electronic automation is good. Unfortunately, that assumption does not hold up in extremely complicated fields such as tax. This post will highlight the flaws in automated tax software and hopefully make the average taxpayer think twice before putting all of their faith in the hands of a program.

Last tax season, the Internal Revenue Service (“IRS”) awarded its Volunteer Income Tax Assistance (“VITA”) and Tax Counseling for the Elderly (“TCE”) contract to the tax software company TaxSlayer. For many low-income taxpayers using these services, TaxSlayer turned out to be a double-edged sword. The software failed to account for the Affordable Care Act’s tax penalty for uninsured individuals, resulting in a myriad of incorrect returns. The burden then fell on taxpayers to file amended returns, assuming they were even aware they were affected by the miscalculations. This is hardly the first time a major tax preparation software has miscalculated returns.

American taxpayers, I ask you this: at what point does the headache of filing your own 1040, or the heartache of paying a CPA to prepare your return for you, outweigh the risks associated with automated tax preparation services? The answer ultimately depends on the complexity of your tax life, but it is a resounding “maybe.” The National Society of Accountants surveyed the market and found that the average cost of preparing a 1040 without itemized deductions is $176 (up from $152 in 2014), while a 1040 with itemized deductions and an accompanying state tax return averages $273 (up from $261 in 2014). Many taxpayers making less than $64,000 per year can file through a service like TurboTax or H&R Block (enjoy reading the terms of service to find additional state filing fees, the cost of unsupported forms, and more!). Taxpayers making less than $54,000, or those 60 years of age or older, can take advantage of the VITA program, a volunteer tax preparation service funded by the IRS. Filing your own 1040: priceless.

When a return is miscalculated, it’s up to the taxpayer to file an amended return, lest the IRS fix the return for you, penalize you, charge you interest on the outstanding balance, and withhold future refunds to pay off the outstanding debt. I assume that for many people using software, the goal is to avoid the hassle of doing your own math and reading through IRS publications on a Friday night. Most software will let you amend your return online, but only for the current tax year. Any older debt will need to be taken care of manually or with the assistance of a preparer.

VITA may seem like a great option for anyone under its income limits. Taxpayers with children can often take advantage of refundable credits that VITA volunteers are very experienced with. However, the Treasury Inspector General reported that only 39% of returns filed by VITA volunteers in 2011 were accurate. Even more fun, the software the volunteers currently use suffered three data breaches in the 2016 filing season. While the IRS is one of the leading providers of welfare in the United States (feeling more generous some years than it ought to be), the low-income taxpayer may have more luck preparing their own returns.

Your friendly neighborhood CPA hopefully understands IRS publications, circulars, and revenue rulings better than the average tax software. Take this anecdote from CBS: TurboTax cost one taxpayer $111.90 and secured her a total of $3,491 in federal and state refunds, leaving her with $3,379.10. Her friendly neighborhood CPA charged a hefty $400 but found $3,831 in federal and state refunds, leaving her with $3,431. Again, not everyone is in the same tax position as this taxpayer, but the fact of the matter is that tax automation doesn’t always provide a cheaper, more convenient solution than the alternative. Your CPA should be able to interpret doubtful areas of tax law much more effectively than an automated program.
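
The arithmetic is worth making explicit. Here is a quick back-of-the-envelope check, using only the figures reported in the CBS story (“net” meaning refunds received minus the preparation fee):

```python
# Net outcome = refunds received minus preparation fee,
# using the figures reported in the CBS anecdote.
turbotax_net = 3491 - 111.90   # software fees of $111.90
cpa_net = 3831 - 400           # CPA fee of $400

print(f"TurboTax net: ${turbotax_net:,.2f}")              # $3,379.10
print(f"CPA net:      ${cpa_net:,.2f}")                   # $3,431.00
print(f"CPA advantage: ${cpa_net - turbotax_net:,.2f}")   # $51.90
```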

Filing yourself is great… provided, of course, you don’t trigger any audit-prone elements in IRS exams. You also get to enjoy a 57% accuracy rate at the IRS help center. Perhaps you enjoy reading the fabled IRS Publication 17, a 293-page treatise filled with Treasury-favored tax positions and out-of-date advice. However, if you’re like many taxpayers in the United States, it might make sense to fill out a very simple 1040 with the standard deduction yourself. It’s free, and as long as you don’t take any outrageous tax positions, you may end up saving yourself the headache of dealing with an amended return from malfunctioning software.

My fellow taxpayers who read an entire post about tax preparation in November, I salute you. There is no simple answer when it comes to tax returns; however, in extremely complex legal realms like tax, automation isn’t necessarily the most convenient option. I look forward to furrowing my brow with you all this April as we complete one of the most convoluted forms our government has to offer.


“Gaydar” Highlights the Need for Cognizant Facial Recognition Policy

Ellen Levish, MJLST Staffer

Recently, two Stanford researchers made a frightening claim: computers can use facial recognition algorithms to identify people as gay or straight.

An MJLST blog post tackled facial recognition issues back in 2012. There, Rebecca Boxhorn posited that we shouldn’t worry too much, because “it is easy to overstate the danger” of emerging technology. In the wake of the “gaydar,” we should re-evaluate that position.

First, a little background. Facial recognition, like fingerprint recognition, relies on matching a subject to given standards. An algorithm measures points on a test face, compares them to a standard face, and determines whether the test is a close fit to the standard. The algorithm matches thousands of points on test pictures to reference points on standards. These test points include those you’d expect: nose width, eyebrow shape, intraocular distance. But the software also quantifies many “aspects of the face we don’t have words for.” In the case of the Stanford “gaydar,” researchers modified existing facial recognition software and used dating profile pictures as their standards. They fed in test pictures, also from dating profiles, and waited.

Recognizing patterns in these measurements, the Stanford study’s software determined whether a test face was more like a standard “gay” or “straight” face. The model was accurate up to 91 percent of the time. That is far higher than chance, and far beyond human ability.
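
To make the mechanics concrete, here is a minimal sketch of the landmark-matching idea described above. It illustrates the general technique only; the landmark names, measurements, and scoring are simplifying assumptions, not the Stanford researchers’ actual pipeline, which compared thousands of machine-learned features:

```python
import math

def feature_vector(landmarks):
    """Reduce raw (x, y) landmark points to simple geometric measurements."""
    left_eye, right_eye, nose_left, nose_right, brow_left, brow_right = landmarks
    return [
        math.dist(left_eye, right_eye),    # intraocular distance
        math.dist(nose_left, nose_right),  # nose width
        math.dist(brow_left, brow_right),  # eyebrow span
    ]

def fit_score(test, standard):
    """Higher score means the test face is a closer fit to the standard face."""
    return 1.0 / (1.0 + math.dist(feature_vector(test), feature_vector(standard)))

def classify(test, standard_a, standard_b):
    """Assign the test face to whichever standard it resembles more."""
    return "A" if fit_score(test, standard_a) >= fit_score(test, standard_b) else "B"

# Toy usage: six hypothetical landmark points per face.
standard_a = [(30, 40), (70, 40), (45, 60), (55, 60), (25, 30), (75, 30)]
standard_b = [(32, 40), (68, 40), (44, 62), (56, 62), (27, 31), (73, 31)]
test_face  = [(31, 40), (69, 40), (45, 61), (55, 61), (26, 30), (74, 30)]
print(classify(test_face, standard_a, standard_b))
```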

The Economist first broke the story on this study. As expected, it gained traction. Hyperbolic headlines littered tech blogs and magazines. And of course, when the dust settled, the “gaydar” scare wasn’t that straightforward. The “gaydar” algorithm was simple, the study was a draft posted online, and the results, though astounding, left a lot of room for both statistical and socio-political criticism. The researchers stated that their primary purpose in pursuing this inquiry was to “raise the alarm” about the dangers of facial recognition technology.

Facial recognition has become much more commonplace in recent years. Governments worldwide openly employ it for security purposes. Apple and Facebook both “recognize individuals in the videos you take” and the pictures you post online. Samsung allows smartphone users to unlock their device with a selfie. The Walt Disney Company, too, owns a huge database of facial recognition technology, which it uses (among other things) to determine how much you’ll laugh at movies. These current, commercial uses seem at worst benign and at best helpful. But the Stanford “gaydar” highlights the insidious, Orwellian nature of “function creep,” which policy makers need to keep an eye on.

Function creep “is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions.” And it poses a major ethical problem for the use of facial recognition software. No doubt inspired developers will create new and enterprising means of analyzing people. No doubt most of these means will continue to be benign and commercial. But we must admit: classification based on appearance and/or affect is ripe for unintended consequences. The dystopian train of thought is easy to follow. It demands that we consider normative questions about facial recognition technology.

Who should be allowed to use facial recognition technologies? When are they allowed to use it? Under what conditions can users of facial technology store, share, and sell information?

The goal should be to keep facial recognition technology from doing harm. America has a disturbing dearth of regulation designed to protect citizens from ne’er-do-wells who have access to this technology. We should change that.

These normative questions can guide our future policy on the subject. At the very least, they should help us start thinking about cogent guidelines for the future use of facial recognition technology. The “gaydar” might not be cause for immediate alarm, but its implications are certainly worth a second thought. I’d recommend thinking on this sooner, rather than later.


Invisible Cryptography: Should Quantum Communications Be Subjected to Legal Restraint?

Jacob Weindling, MJLST Staffer

Sending secret messages across the world has traditionally meant risking interception or eavesdropping by unintended recipients. Letters sent on horseback, telegraphs sent over wires, and radio transmissions through the atmosphere were all theoretically capable of interception in transit between the sender and the receiver. This problem was particularly pronounced in World War II, when the Allies easily intercepted secret Axis transmissions and vice versa. To ensure secrecy, the messages were encoded, resulting in seemingly random jumbles of characters for unintended recipients.

Message encoding in World War II operated on two separate principles. For particularly sensitive messages, ‘one-time pads’ were created using (theoretically) random values as starting points. This technique for encryption, while essentially ‘unbreakable’ without access to a copy of the one-time pad, required both the sender and the recipient to hold identical copies of the pads. The second method used machines to transform plaintext messages into code. This second method, famously employed by Nazi Germany’s Enigma machine, substituted a complicated but non-random algorithm for true randomness, trading theoretical security for convenience and reliability. While Enigma proved a sufficient safeguard against traditional pen-and-paper codebreakers, early computers proved adept at quickly defeating the encryption, as dramatically highlighted in “The Imitation Game,” the recent film detailing Alan Turing’s invention of a codebreaking machine during World War II.

Perhaps unsurprisingly, cryptographic systems were added to the State Department’s International Traffic in Arms Regulations (“ITAR”) Munitions List shortly after World War II. Thus, while the U.S. government was severely limited in its ability to shield secret messages from foreign adversaries, it categorized the tools, methods, and development of cryptographic systems as munitions and severely regulated their export to foreign entities. While the Department of State has since narrowed the scope of regulated cryptography to exclude civilian products, regulations remain on specialized military applications. A key assumption of this regulatory regime is that sensitive diplomatic and military information will be transmitted ‘in the clear’ for anyone who happens to have access to the channel of communication. While today many communications have moved from radio waves to fiberoptic cables, both systems remain vulnerable to surveillance over the air and online.

Last year, however, China took a major step toward a vast departure from the traditional philosophy of secret communication. With the launch of the QUESS satellite, China hopes to enable quantum entanglement communication between two ground sites. The satellite would in principle transmit a photon to the ground while retaining a photon that is ‘entangled’ with the released photon. Any changes to the photon on the satellite would thus be reflected in the photon on the ground, serving as a rudimentary method for transmitting binary information. This test comes on the heels of an experiment at Delft University of Technology in the Netherlands, which demonstrated the transmission of information between two electrons separated by a distance of 17 kilometers.

A unique feature of this mode of transmission is that information is not propagated from the sender to the receiver via radio waves, which can be intercepted, but rather via the principle of quantum entanglement. Any attempt to eavesdrop would theoretically be detectable, as the act of observing the photons in transit would change their state and render the communication either unreadable or obviously tampered with. A system could therefore be developed to automatically cut off communications if disturbances are detected.
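
A toy simulation can illustrate the cut-off logic, though not the underlying physics. In quantum key distribution schemes, the parties publicly compare a random sample of their shared bits; an eavesdropper’s measurements disturb some of them, so an elevated error rate triggers an abort. The disturbance probability and threshold below are illustrative assumptions:

```python
import random

def transmit(bits, eavesdropper=False):
    """Simulate the channel; an observer disturbs roughly 25% of the bits."""
    received = list(bits)
    if eavesdropper:
        for i in range(len(received)):
            if random.random() < 0.25:
                received[i] ^= 1  # state changed by being observed
    return received

def tampered(sent, received, sample_size=100, threshold=0.05):
    """Publicly compare a random sample; flag the channel if errors exceed the threshold."""
    sample = random.sample(range(len(sent)), sample_size)
    error_rate = sum(sent[i] != received[i] for i in sample) / sample_size
    return error_rate > threshold

sent = [random.randint(0, 1) for _ in range(1000)]
if tampered(sent, transmit(sent, eavesdropper=True)):
    print("Disturbance detected: cut off communications.")
```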

Interestingly enough, the U.S. Patent and Trademark Office has granted a patent that describes a similar method for transmitting information via quantum entanglement. The invention, claimed by Zhiliang Yuan and Andrew James Shields on behalf of Toshiba Corporation, was filed with the PTO on September 8, 2006 and published August 7, 2012. This patent builds on prior art that envisioned quantum cryptography, much of which was quietly filed with the PTO during the preceding two decades. Nevertheless, neither Congress nor the Department of State has acted to incorporate any reference to quantum communications into law, perhaps reflecting an unwillingness to address emerging technology that sounds like science fiction, as with self-driving cars and cyberspace before it.

Despite Congress’ history of lethargy in addressing new innovations and the State Department’s regulatory silence on the matter, legislative action or regulation may yet be premature. China has claimed the satellite successfully sent a ‘hack-proof’ communication, but the results have not been studied by the scientific community. Furthermore, no public demonstration has been made of a practical, non-laboratory quantum entanglement communication product. Even if the technology were brought to market, any early application would likely have severely limited bandwidth by today’s standards, more closely resembling the telegraph than a gigabit internet connection. But with organizations around the world exploring ground- and space-based experiments with quantum communications, the technology appears poised to exit science fiction and enter practical application. Within the next generation, the codebreaking arms race may become obsolete, and Congress will be faced with the need to address the new secret communication regime.


Artificial Wombs and the Abortion Debate

Henry Rymer, MJLST Staffer

In a study published in late April 2017, a group of scientists reported that they had created an “extra-uterine system” that assisted in the gestation, and eventual birth, of several fetal lambs. The device, which houses the fetus in a clear plastic bag, is filled with a synthetic amniotic fluid that flows in and out of the bag through a pump system. While inside this artificial womb, the fetus is attached by its umbilical cord to a machine outside of the bag. This machine serves several purposes: providing nutrition to the fetus, giving the fetus necessary medication, supplying the fetus’s blood with a blend of air, oxygen, and nitrogen, and removing carbon dioxide from the bloodstream. The scientists report that in housing the premature lamb fetuses in this system, they were able to “maintain stable haemodynamics, have normal blood gas and oxygenation parameters, and maintain patency of the fetal circulation” within the fetuses. Additionally, the fetal lambs subject to this test demonstrated “normal somatic growth, lung maturation and brain growth and myelination.” The scientists believe this extra-uterine system will not be relegated only to animal use; in their estimation, the device could support a premature human infant “for up to four weeks.”

With the advent of this new piece of neonatal technology, and specifically with the implications that this invention (and others like it) carries for human fetal development, the artificial womb has the power to completely shift the paradigm of how the abortion debate is framed. In particular, the impact this invention will have when combined with American jurisprudence will surely be a new point of contention between pro-abortion activists and their anti-abortion counterparts.

In Planned Parenthood v. Casey, the Supreme Court re-enshrined the thesis of Roe v. Wade: namely, that women have the right to have an abortion prior to the viability of the fetus. Planned Parenthood of Southeastern Pa. v. Casey, 505 U.S. 833, 846. The Casey court also stated that states have the power to “restrict abortions after fetal viability, if the law contains exceptions for pregnancies which endanger the woman’s life or health,” and that the “State has a legitimate interest from the outset of the pregnancy in protecting the health of the woman and the life of the fetus that may become a child.” Id.

The arguments that arise from the advent of the artificial womb, in conjunction with existing case law, flow from the question of what a “viable” fetus will be once extra-uterine systems become more mainstream and sophisticated. If these machines develop to the point where they can take a fetus the moment after conception and develop it for its entire gestation period, will abortion procedures become completely outlawed? Will “viability” remain the measure by which a fetus is distinguished from a human, or will a new metric be invented to replace it? And will this be a problem for the courts to answer? The legislature? Or a combination of both? The invention of artificial wombs may seem a peripheral legal issue that will not need to be answered for some time yet. However, there are many questions that must be answered as the technology improves and develops, and the abortion debate will not remain untouched as humanity moves into the future.


Extending the Earth’s Life to Make It Off-World: Will Intellectual Property Law Allow Climate Change to Go Unchecked?

Daniel Green, MJLST Staffer

The National Aeronautics and Space Administration (NASA) recently discovered seven Earth-like planets. Three of these planets are even located at the specific distance from their star, TRAPPIST-1, to fall within the proposed “Goldilocks zone” necessary to sustain life, bringing about the conversation of whether a great migration for humanity is in order, as seen in films of the last ten years such as Passengers, The Martian, Interstellar, and even Wall-E. Even Elon Musk and Stephen Hawking have stated that the human race needs to leave Earth before the next extinction-level event occurs. The possibility that these planets may be habitable presents some hope for a future in which humanity inhabits other worlds.

Sadly, these planets are forty light years (or 235 trillion miles) away. Although relatively near to Earth in astronomical terms, that distance means there is no possibility of reaching such a planet in a reasonable time with present technology, even though NASA is increasing funding and creating institutes for such off-world possibilities. As such, humankind needs to look inward and extend the life of our own planet in order to survive long enough to even consider such an exodus.

Admittedly, humanity faces many obstacles in its quest to survive long enough to reach other planets. One of the largest and most dire is climate change. Specifically, the rise in the temperature of the Earth needs to be kept within the bounds of the two-degree Celsius goal before 2100 C.E. Fortunately, technologies are well on their way to development to combat this threat. One of the most promising of these new technologies is solar climate engineering.

Solar climate engineering, also known as solar radiation management, is essentially a way to make the planet more reflective in order to block sunlight and thereby deter the increase in temperature caused by greenhouse gases. Though the technology is promising, Reynolds, Contreras, and Sarnoff predict in Solar Climate Engineering and Intellectual Property: Toward a Research Commons that it may be greatly hindered by intellectual property law.

Since solar climate engineering is a relatively new scientific advancement, it can be greatly improved by the sharing of ideas. However, intellectual property law runs directly contrary to this, raising the question of why anyone would want to hinder technology so vital to the Earth’s survival. The answer lies in numerous dynamics, including the following three:

  • Patent “thickets” and the development of an “anti-commons”: This problem occurs when too many items in the same technological field are patented, making it extremely difficult to invent around existing patents. As such, scientific advancement halts because patented technologies cannot be built upon or improved.
  • Relationship to trade secrets: Private entities that have financial interests in funding research may refuse to share advancements in order to protect the edge those advancements give them in the market.
  • Technological lock-in: Broad patents at the beginning of research may force others to rely on technologies within the scope of the patent when working on future research and development. Such actions may ingrain a certain technology into society even though a better alternative is available but not adopted.

There is no need to despair yet though since several steps can be taken to combat barriers to the advancement of solar climate engineering and promote communal technological advancement such as:

  • State interventions: Government can step in to ensure that intellectual property law does not hinder advancements needed for the good of humanity. It can do this through numerous actions, such as legislative and administrative action, march-in rights, compulsory licensing, and asserting control over funding.
  • Patent pools and pledges: Patent pools allow others to use one’s patents in development under an agreement to split the proceeds. Patent pledges, similarly, limit enforcement by a patent holder through a promise in the form of a legally binding commitment. Though patent pools have more legal limitations, both incentivize sharing technology and furthering advancement.
  • Data commons: Government procurement and research funding can promote systematic data sharing in order to develop a broadly accessible repository as a commons. Such methods ideally promote rapid scientific advancement by broadening the use and accessibility of each advancement through the discouragement of patents.

Provided that intellectual property laws do not stand in the way, humanity may very well have taken its first steps toward extending its time to develop the technologies needed to, someday, live under the alien rays of TRAPPIST-1.


Social and Legal Concerns as America Expands Into the Brain-Computer Interface

Daniel Baum, MJLST Staffer

A great deal of science and technology has been emerging in the field of the brain-computer interface, the connection between the human brain and machines. In addition to forming effective prosthetics and helping doctors repair brain damage, technology in the brain-computer interface has recently allowed a man to operate a prosthetic hand and an electric wheelchair with his mind, using only a microelectrode array surgically implanted into his arm’s nerve fibers. The professor who developed the implant also experimented on himself and made himself able to see in the dark: with an implant in the median nerve of his wrist, he could use the electric feedback from an ultrasonic range-finding sensor mounted on his hat to guide himself around a room blindfolded. Since this technology is still in its experimental stages, American law does not have much to say about human enhancements. Already, dangerous medical devices can lead to confusing and unfair trials, and it’s easy to imagine courtrooms getting even more confusing and unfair as medical devices progress into the brain-computer interface. This technology is close enough that implementing legal changes now could help it develop in ways that balance minimizing harm with utilizing its enormous potential to make people better.
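
The range-finder feedback loop is simple enough to sketch. The mapping below is a hypothetical illustration of the idea (nearer obstacles produce stronger nerve stimulation), not the professor’s actual design; the function name, the linear mapping, and the 4-meter range are all assumptions:

```python
def stimulation_intensity(distance_m: float, max_range_m: float = 4.0) -> float:
    """Map an ultrasonic distance reading to a 0..1 stimulation level:
    1.0 at contact, falling linearly to 0.0 at the sensor's maximum range."""
    clamped = min(max(distance_m, 0.0), max_range_m)
    return 1.0 - clamped / max_range_m

# A wall 0.5 m away stimulates strongly; one 3.5 m away, only faintly.
for reading in (0.5, 2.0, 3.5):
    print(f"{reading:.1f} m -> intensity {stimulation_intensity(reading):.2f}")
```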

Current laws impose no affirmative duty on manufacturers to allow pacemaker users access to their own data, and the top five manufacturers do not allow patients to access the data produced by their own pacemakers at all. As we begin to view machines as extensions of ourselves, in order to maintain our personal autonomy, we will need to be able to control who accesses the data we produce. This calls for an already necessary legal change: a right to access and control access to the data generated by objects that are effectively extensions of ourselves.

As this technology moves from healing disabled humans to giving normal people supernormal powers, its use will become much more widely pursued: “the disabled may prove more abled; we may all want their prostheses.” If other job applicants are capable of so much more because of their built-in brain-computer interface technology, employers may discriminate against natural, unenhanced humans. To protect people who cannot, or choose not to, install machinery in the brain-computer interface, whether for financial, medical, ethical, religious, or any other reasons, lawmakers could enact an independent statutory scheme aimed at eliminating discrimination both for and against individuals with brain-computer interface devices. Such a scheme would not disturb the currently established disability protocols in the Americans with Disabilities Act and could be amended to account for each new form of machinery.

Another frightening concern arises once these enhancements become capable of connecting to the internet: if someone hacks into somebody else’s machinery and makes that person damage something or someone, who will be criminally and civilly liable for the damage? Since American law does not have much to say about human enhancements, no defense has been defined for the person who was hacked and forced to cause harm. The person whose body actually committed the act could try pleading the affirmative defense of duress, that is, that the defendant was compelled to commit the crime against his or her will or judgment. But the U.S. Supreme Court held in 2014 in Rosemond v. United States that “circumstances that traditionally would support a necessity or duress defense” require proof that the defendant “could have walked away.” The hacker took away the defendant’s control of his or her own body, making it impossible for the defendant to have walked away. To solve this problem, states that recognize the defense of insanity could amend their statutes to allow defendants who were mentally unable to control their own bodies due to hacking to plead the affirmative defense of insanity. States that conform to the Federal Rules of Criminal Procedure would then order the defendant to be mentally examined by an expert who could determine, and tell the court, to what extent the defendant was in control of his or her own mind and body at the time of the crime. The defendant could then implead the hacker to shift the liability for committing the crime. However, since the insanity defense is a mental health defense and brain-computer interface devices aren’t necessarily related to mental health, states may want to define a new affirmative defense for being hacked that follows a similar procedure but better fits the situation and doesn’t carry the stigma of mental disorder.

New machinery in the brain-computer interface is exciting and will allow us both to heal physical and mental damages and to develop supernormal powers. Legal changes now could help this emerging technology develop in ways that will balance minimizing harms like invasions of privacy, discrimination, and hacking with utilizing its enormous potential to make people better.


Court’s Remain Unclear About Bitcoin’s Status

Paul Gaus, MJLST Staffer

Bitcoin touts itself as an “innovative payment network and a new kind of money.” Also known as a “cryptocurrency,” Bitcoin was hatched out of a paper posted online by a mysterious figure named Satoshi Nakamoto (he has never been identified). The Bitcoin economy is quite complex, but it is generally based on the principle that Bitcoins are released into the network at a steady pace determined by an algorithm.

Although once shrouded in ambiguity, Bitcoin has threatened to upend (or “disrupt,” in Silicon Valley speak) the payment industry. At their core, Bitcoins are just unique strings of information that users mine and typically store on their desktops. The list of companies that accept Bitcoin is growing and includes cable companies, professional sports teams, and even a fringe American political party. According to its proponents, Bitcoin offers lower transaction costs and increased privacy without the inflation that affects fiat currency.
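
The “mining” that produces those strings is, at bottom, a brute-force search. The sketch below illustrates the proof-of-work idea in simplified form; real Bitcoin mining hashes a structured block header with double SHA-256 at a far higher difficulty, so treat the details as illustrative assumptions:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash of the block data
    starts with `difficulty` zero hex digits (the proof of work)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce is slow by design; verifying it takes a single hash.
print(mine("example block of transactions"))
```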

Technologies like Bitcoin do not come without interesting legal implications. One oft-cited downside of Bitcoin is that it can facilitate criminal enterprises. In such cases, courts must address what status Bitcoin has in the current economy. The Southern District of New York recently held that Bitcoins were unequivocally a form of currency for purposes of criminal prosecution. In United States v. Murgio et al., Judge Alison Nathan determined that Bitcoins are money because “Bitcoins can be accepted as payment for goods and services or bought directly from an exchange with a bank account . . . and are used as a medium of exchange and a means of payment.” By contrast, the IRS classifies virtual currency as property.

Bitcoin is uncertain, volatile, and complex, but it continues to be accepted as currency and shows no signs of fading away. Going forward, the judiciary will need to streamline its treatment of Bitcoin.


Recent Ninth Circuit Ruling an Important One for State and Local Governments Seeking to Regulate Genetically Modified Plants

Jody Ferris, Note & Comment Editor

Genetically modified plants (GMOs) are, and have always been, a hot topic in agriculture and food policy. Since they were first developed, groups have been lobbying at various levels of government to impose regulations on how they are grown or to have them banned outright. A noteworthy decision has come down for those following legal challenges to GMO regulation. In Alika Atay et al. v. County of Maui et al., the Ninth Circuit ruled that state and local governments may regulate the production of GMOs in their jurisdictions.

The original suit was filed by GMO proponents after the County of Maui enacted a ban on genetically modified crops. The court held that federal regulation of GMOs does not preempt state and local regulation once a variety is commercialized. This means that the United States Department of Agriculture holds jurisdiction over all GMO varieties prior to commercialization, during the development and testing period before a variety is sold on the market. According to the Ninth Circuit, after a variety is commercialized, however, state and local governments are free to enact regulations, including outright bans of GMO production, without the need to worry about federal preemption.

Interestingly, the county regulations at issue in the suit were nonetheless struck down by the court, because the State of Hawaii already has a comprehensive regulatory scheme that the court held preempts county GMO regulations. This outcome disappointed the local environmental and anti-GMO groups that had supported the new county-level GMO restrictions. However, the decision will help clarify the respective regulatory responsibilities of individual counties and the State of Hawaii. Despite those groups’ disappointment, the holding that there is no federal preemption of regulation of commercialized GMO varieties is an important one for many states in the Ninth Circuit, as counties in Washington and California, for example, have also enacted bans on GMO production.

This decision will likely encourage states wishing to enact their own regulations for how GMO varieties are grown and handled. It is also encouraging for individual counties that wish to enact GMO bans or county-level regulations, should state-level regulations not be preemptive. It will certainly be interesting to follow how state and local governments structure any future regulatory activities in light of this ruling.


Navigating the Future of Self-Driving Car Insurance Coverage

Nathan Vanderlaan, MJLST Staffer

Autonomous vehicle technology is not new to the automotive industry. For the most part, however, these technologies have been incorporated as back-up measures for when human error leads to poor driving. For instance, car manufacturers have offered packages that incorporate features such as blind-spot monitoring, forward-collision warnings with automatic braking, and lane-departure warnings and prevention. But the recent push by companies like Google, Uber, Tesla, Ford, and Volvo is making the possibility of fully autonomous vehicles a near-future reality.

Autonomous vehicles will arguably be the next great technology responsible for saving countless lives. According to alertdriving.com, over 90 percent of accidents are the result of human error. The Atlantic estimates that by taking human error out of the driving equation, the full implementation of automated cars could save up to 300,000 lives a decade in the United States alone. In a show of federal support, U.S. Transportation Secretary Anthony Foxx released an update in January 2016 to the National Highway Traffic Safety Administration’s (NHTSA) stance on autonomous vehicles, promulgating a set of 15 standards to be followed by car manufacturers in developing such technologies. Further, in March 2016, the NHTSA promised $3.9 billion in funding over 10 years to “support the development and adoption of safe vehicle automation.” As the world pushes toward fully autonomous vehicles, the insurance industry will have to respond to the changing nature of vehicular transportation.

One of the companies leading the innovative charge is Tesla. New Tesla models may now come equipped with an “autopilot” feature, which uses multiple external sensors that relay real-time data to a computer that navigates the vehicle in most highway situations. It allows the car to slow down when it encounters obstacles and to change lanes when necessary. Elon Musk, Tesla’s CEO, estimates that the autopilot feature can reduce Tesla driver accidents by as much as 50 percent. Still, the system is not without issues. This past June, a user of the autopilot system was killed when his car collided with a tractor trailer that the car’s sensors failed to detect. Tesla quickly distributed a software update that Musk claims would have been able to detect the trailer. The accident has prompted discussion of how insurance claims and coverage will adapt to accidents that the vehicle’s owner no longer causes.

Auto insurance is a state-regulated industry. Currently, there are two significant insurance models: the no-fault model and the tort system. While state systems differ in many details, each model has the same overarching structure. No-fault insurance models require the insurer to pay parties injured in an accident regardless of fault. Under the tort system, the insurer of the party responsible for the accident foots the bill. Under both systems, however, the majority of insurance premium costs derive from personal liability coverage. A significant portion of the insurance coverage structure is thus premised on the notion that drivers cause accidents. But when the driver is taken out of the equation, the basic concept behind automotive insurance changes.

The most logical response to the implementation of fully autonomous vehicles may seem to be holding the manufacturer liable. Whenever a car engaged in a self-driving feature crashes, it can be presumed that the crash was caused by a manufacturing defect. The injured party would then bring a products-liability action to recover for damages suffered during the accident. Yet this system ignores some important realities. One is that manufacturers will likely pass the new cost on to consumers through the purchase price of the car. These costs could put a car outside the average consumer’s price range and could hinder the widespread implementation of a safer alternative to human-driven cars. Even if manufacturers don’t rely on consumers to cover the bill, the new system will likely require new forms of regulation to protect car manufacturers from going under due to overwhelming judgments in the courts.

Perhaps a more effective method of insurance coverage has been proposed by RAND, a research organization that specializes in evaluating and suggesting how best to utilize new technologies. RAND has suggested that a universal no-fault system be implemented for autonomous vehicle owners. Under such a system, autonomous car drivers would still pay premiums, but those premiums would be significantly lower as accident rates decrease. For this system to work, regulation would likely have to come from the federal level to ensure the policy is followed uniformly across the United States. One insurer that has begun a program mirroring this philosophy is Adrian Flux in Britain, which offers a plan for drivers of semi-autonomous vehicles that is priced lower than traditional insurance plans. Adrian Flux has also announced that it will update its policies as both the liability debate and driverless technology evolve.

No matter the route chosen by regulators or insurance companies, the issue of autonomous car insurance likely won’t come to a head until 2020, when Volvo plans to place commercial, fully autonomous vehicles on the market. Even then, it could be decades before a majority of vehicles on the street have such capabilities. This interval will give regulators, insurers, and manufacturers alike adequate time to develop a system that will best propel our nation toward a safer, autonomous automotive society.


Digital Health and Legal Aid: The Lawyer Will Skype You Now

Angela Fralish, MJLST Invited Blogger

According to Dr. Shirley Musich’s research article, Homebound Older Adults: Prevalence, Characteristics, Health Care Utilization and Quality of Care, homebound patients are among the top 5% of medical service users, with persistently high expenses. As it stands, about 3.6 million homebound Americans are in need of continuous medical care, but with the cost of healthcare rising, growing numbers of elderly people retiring, hospitals closing in increasing numbers, and physician shortages anticipated, caring for the homebound is becoming expensive and impractical. In an article titled Care of the Chronically Ill at Home: An Unresolved Dilemma in Health Policy for the United States, author Karen Buhler-Wilkerson notes that even after two centuries of various experiments to deliver and finance home health care, there are still too many unresolved issues.

One potential solution lies at the crossroads of technology, medicine, and law. Telemedicine is a well-known medical technology providing cost-effective care for the homebound. Becker’s reports that telemedicine visits are often more affordable, and that access is a very important component in two senses: patients can communicate through a smartphone, and clinicians can reach patients at a distance, particularly those for whom weekly travel to a hospital for necessary follow-ups or check-ins would be costly and infeasible. Telemedicine is a form of affordable technology that reaches homebound patients.

Legal aid organizations are also beginning to integrate virtual services for the homebound. For example, at Illinois Legal Aid Online, clients are able to have a live consultation with a legal professional, and in Maryland, a virtual courthouse is used for alternative dispute resolution proceedings. Some states, such as Alaska and New York, have advocated for virtual consults and hearings as part of a best-practices model. On September 22nd of this year, the ABA launched a free virtual legal advice clinic to operate as an online version of a walk-in clinic. Despite these responsive measures, however, virtual technology for legal aid remains expensive and burdensome.

But what about the cancer patient who can’t get out of bed to come in for a legal aid appointment, yet needs help with a disability claim to pay their medical bills? Could diversifying telehealth user interfaces help close the accessibility gap for both medicine and law?

Some organizations have already begun collaborations to address these issues. Medical-legal partnerships provide comprehensive care through cost-effective resource pooling of business funds and federal and corporate grant money. These partnerships resolve the sociolegal determinants impacting a patient’s health. One classic case example is the homebound patient with aggravated asthma living in a house with mold spores. A lawyer works to get the housing up to code, which reduces the asthma and, consequently, future medical costs. Lawyers resolve the economic factors perpetuating a health condition while physicians treat it biologically. These partnerships are being implemented nationwide because of their proven results in decreasing the cost of care. With telehealth, the homebound asthmatic patient could log on to their computer, or work through an app on their phone, to show the attorney the living conditions in high resolution, in addition to receiving medical treatment.

The government seems favorable to these solutions. The Health Resources and Services Administration allocated $18 million to health center collaborations seeking to improve quality care through health information technology. Further, the FDA has created the Digital Health program to encourage and foster collaborations in technologies that promote public health. Last year alone, Congress awarded $4 million to the Legal Services Corporation, which then disbursed that money among 15 legal aid organizations, many of which “will use technology to connect low-income populations to resources and services.” Telehealth innovation is a cornerstone for medical and legal professionals committed to improvements in low-cost, quality patient care, especially for the homebound.

Medical facilities could even extend this same technology profitably by offering patients an in-house “attorney consult” service to improve quality of care. Much like the invention of the convenient cordless phone, a telehealth phone could be used in-house or in outpatient settings to give a health organization a leading market edge while decreasing costs. Technology has yet to fully explore the number of ways telehealth can be used to deliver legal services that improve healthcare.

So if there is a multidisciplinary call for digital aid, why aren’t we seeing more of it on a daily basis? For one, the regulatory landscape may cause confusion. The FDA governs medical devices, the FTC regulates PHI data breaches, and the FCC governs devices using broadcast services or the electromagnetic spectrum. Telehealth touches on all of these, resulting in jurisdictional overlap among regulatory agencies. Other reasons may involve resistance to new technology and ever-evolving legislation and policies. In Teladoc, Inc. v. Texas Medical Board, a standard-of-care issue was raised when the medical board moved to require physicians to see a patient in person before prescribing medicine. One physician in the case stated that without telehealth, his homebound patient would receive no treatment. Transitioning from traditional in-person consultations to virtual assistance can greatly improve a patient’s health, but it has brought an entourage of notable concerns.

Allegedly, telehealth was first put to use by Alexander Graham Bell in 1876, when he made a phone call to his doctor. Over 140 years later, the technology is used by NASA for outer-space health consults. While the technology is still relatively new, especially for collaborative patient treatment by doctors and lawyers, used wisely it could spark an interdisciplinary renaissance in using technology to improve healthcare systems and patients’ lives.

From all perspectives, virtual aid is a well-funded future component of both the medical and legal fields. It can be used in the legal sense to help people in need, in the business sense as an ancillary convenience service generating profits, or in the medical sense to provide care for the homebound. The trick will be to find engineers who can secure multiuse interfaces while meeting federal regulations and public demand. Only time will tell whether such a tool can be efficiently developed.