New Technology

Apple Inc. Under Continued Scrutiny After iPhone Throttling Admission

Alex Eschenroeder, MJLST Staffer

While innovative tech companies typically receive widespread attention for increasing the speed and performance of a given device, Apple Inc. has received attention in the past few weeks for exactly the opposite reason. Apple's actions have caught the attention of consumers and consumer advocates around the world and, more recently, of the U.S. Department of Justice (DOJ) and the Securities and Exchange Commission (SEC) as well.

 

The action at issue is Apple's intentional throttling, or slowing down, of iPhone performance. Apple apologized for the throttling on December 28, 2017, in response to building pressure from "users and tech analysts" who noticed iPhone slowdowns. In its apology message, Apple focused on the risk of unexpected phone shutdowns resulting from the fact that "[a] chemically aged battery also becomes less capable of delivering peak energy loads, especially in a low state of charge." Apple asserted that it addressed this risk by delivering an iOS (iPhone operating system) update that "dynamically manages the maximum performance of some system components when needed to prevent a shutdown." In addition to explaining the throttling, Apple announced a fifty-dollar discount on iPhone battery replacements. However, replacement availability has been limited, and the discount has not stopped investigations and inquiries from multiple parties.
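Apple has not published the algorithm behind this "dynamic management," so any concrete description is speculative. The sketch below is a minimal, hypothetical illustration of how a performance cap keyed to battery health, charge level, and temperature might work; the function name, inputs, and thresholds are all assumptions, not Apple's implementation.

```python
# Hypothetical sketch of a dynamic performance cap keyed to battery condition.
# Apple has not disclosed its actual algorithm; the names and thresholds here
# are illustrative assumptions only.

def performance_cap(battery_health: float, state_of_charge: float,
                    temperature_c: float) -> float:
    """Return the fraction (0.0-1.0) of peak processor performance to allow.

    battery_health: remaining capacity relative to a new battery (0.0-1.0)
    state_of_charge: current charge level (0.0-1.0)
    temperature_c: battery temperature in degrees Celsius
    """
    cap = 1.0
    # A chemically aged battery delivers peak current less reliably...
    if battery_health < 0.8:
        cap = min(cap, 0.7)
    # ...especially at a low state of charge or in the cold.
    if state_of_charge < 0.2:
        cap = min(cap, 0.6)
    if temperature_c < 10:
        cap = min(cap, 0.8)
    return cap


if __name__ == "__main__":
    # An aged battery near full charge is still capped under this sketch.
    print(performance_cap(battery_health=0.75, state_of_charge=0.95,
                          temperature_c=22))  # 0.7
```

Notably, a policy keyed to battery health alone would keep throttling a phone even at full charge, which is exactly the kind of behavior the questions later in this post press on.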

 

Shortly after Apple's admission, consumer and watchdog groups in France, Italy, and China submitted questions to Apple. The French consumer group "Stop Programmed Obsolescence" filed a complaint in December alleging "that Apple pressures customers to buy new phones by timing the release of new models with operating system upgrades that cause older ones to perform less well." The complaint sparked an investigation by the Paris prosecutor's office. Questions have also come from within the U.S. Senate, where South Dakota Senator John Thune wrote a letter to Apple CEO Tim Cook that "pressed Apple for answers to a series of questions about how the company decided to throttle back iPhone processing performance in phones with older batteries."

 

In addition to these sources of pressure, the latest major development is that the SEC and DOJ have initiated their own probes. Both agencies declined to comment on their investigations. Further, "Apple acknowledged in a statement that it is responding to questions from some government agencies, though it declined to disclose which agencies or any details regarding the questions." Thus, very little is known at this point about the substance of the investigations. Commentators speculate that, in this type of case, "the SEC could try to fault a public company for failing to make timely disclosures about material information that would affect the stock price."

 

While a more superficial investigation is possible, it would likely leave critical questions unaddressed. Some questions I would like to put to Apple are as follows: If Apple's battery issues cause peak energy load delivery problems primarily in a low state of charge, why does the dynamic management system coded into iOS slow down app launch times even at or near full charge? If the iOS update manages the maximum performance of system components only when needed to prevent a shutdown, does that mean a phone that takes longer to launch any given app on any given launch is constantly at risk of shutting down? What would it mean if Apple released code to deactivate throttling and an iPhone with previously slow app launch times didn't shut off immediately? How many other devices does Apple's throttling apply to, and when might Apple admit as much? Looking at you, Apple Watch.

 

These questions are not expertly devised, but they represent a reality that Apple will have to grapple with in the coming months: when so many people use your product so frequently, there are mountains of user experiences that can be marshaled to call any "explanation" into question. Those experiences may help debunk any account that varies significantly from the truth.


The Electric Vehicle: A Microcosm for America’s Problem With Innovation

Zach Sibley, MJLST Staffer

 

Last year, former U.S. Patent and Trademark Office Director David Kappos criticized a series of changes in patent legislation and case law for weakening innovation protections and driving technology investments toward China. Since then, it has become apparent that America's problem with innovation runs deeper than the strength of U.S. patent rights. State and federal policies toward new industries also appear to be trending against domestic innovation. One illustrative example is the electric vehicle (EV).

 

EVs offer better technological upsides than their internal combustion engine vehicle (ICEV) counterparts. Most notably, as the U.S. grid moves toward "smart" infrastructure that leverages the Internet of Things, EVs can interact with the grid and help maximize the efficiency of its infrastructure in ways not possible with ICEVs. Additionally, with clean air and emission targets imminent—like those in the Clean Air Act or in more stringent state legislation—EVs offer the most immediate impact in reducing mobile source air pollutants, especially in a sector that recently became the nation's largest carbon dioxide emitter. Finally, EVs present electric utilities facing a "death spiral" with an opportunity to recover profits by increasing electricity demand.

 

Recent state and federal policy changes, however, may hinder the efforts of EV innovators. Eighteen state legislatures have enacted EV fees—including Wisconsin's recent adoption and the overturned fee in Oklahoma—ranging from $50 to $300. Proponents claim the fees create parity between traditional ICEV drivers and new EV drivers who do not pay the fuel taxes that fund maintenance of transportation infrastructure. Recent findings, though, suggest that EV drivers in some states with the fee were already paying more upfront in taxes than their ICEV road-mates. The fee also only creates parity when the focus is solely on the wear and tear all vehicles cause on shared road infrastructure. The calculus for these fees often neglects that EV and ICEV drivers also share the same air resources, yet no companion tax charges ICEVs for their share of wear and tear on air quality.
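For a sense of the parity argument, here is a back-of-the-envelope comparison in the spirit of the findings mentioned above. The annual mileage, fuel economy, state tax rate, and fee amount are illustrative assumptions rather than any particular state's statutory figures; the federal gasoline excise tax of 18.4 cents per gallon is the one real number.

```python
# Back-of-the-envelope parity check between a flat annual EV fee and the fuel
# taxes a comparable gasoline driver pays. All figures except the federal gas
# tax are illustrative assumptions, not any state's statutory numbers.

ANNUAL_MILES = 12_000          # assumed miles driven per year
MPG = 25                       # assumed fuel economy of a comparable ICEV
STATE_GAS_TAX = 0.30           # assumed state fuel tax, $ per gallon
FEDERAL_GAS_TAX = 0.184        # federal gasoline excise tax, $ per gallon
FLAT_EV_FEE = 150              # assumed annual EV registration fee, $

gallons = ANNUAL_MILES / MPG
icev_fuel_taxes = gallons * (STATE_GAS_TAX + FEDERAL_GAS_TAX)

print(f"ICEV driver pays roughly ${icev_fuel_taxes:.2f} in fuel taxes per year")
print(f"EV driver pays a flat ${FLAT_EV_FEE} fee")
```

Whether the flat fee over- or under-charges the EV driver turns entirely on the numbers plugged in, which is why the findings above vary from state to state.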

 

At the federal level, changes in administrative policy are poised to exacerbate the problem further. The freshly proposed GOP tax bill includes a provision to repeal the $7,500 tax credit that has made lower-cost EVs a more affordable option for middle-class drivers. This change should be contrasted with foreign efforts, such as those in the European Union to increase CO2 reduction targets and offer credits for EV purchases. The contrast can be summed up by one commentator's observation about The New York Times, which reported, within the span of a few days, on the U.S. EPA's rollback of the Clean Power Plan and then on General Motors moving toward a fully electric line in response to the Chinese government. The latter story harkens back to Kappos' comments at the beginning of this post, where again a changing U.S. legal and regulatory landscape is driving innovation elsewhere.

 

It is a basic tenet of economics that incentives matter. Even in a state with a robust EV presence like California, critics question the wisdom of assessing fees and repealing incentives this early in a nascent industry offering a promising technological future. The U.S. used to be great because it was the world's gold standard for innovation: the first light bulb, the first mass-produced car, the first airplane, the first trip to the moon, and the first personal computers (to name a few). Our laws need to continue to reflect our innovative identity. Hopefully, with legislation like the STRONG Patents Act of 2017 and a series of state EV incentives on the horizon, we can return to our great innovative roots.


Tax Software: Where Automation Falls Short

Kirk Johnson, MJLST Staffer

 

With the rise of automated legal technologies, we sometimes assume that any electronic automation is good. Unfortunately, that assumption does not translate well to extremely complicated fields such as tax. This post will highlight the flaws in automated tax software and, hopefully, make the average taxpayer think twice before putting all of their faith in a program.

Last tax season, the Internal Revenue Service ("IRS") awarded its Volunteer Income Tax Assistance ("VITA") and Tax Counseling for the Elderly ("TCE") contract to the tax software TaxSlayer. For many low-income taxpayers using these services, TaxSlayer turned out to be a double-edged sword. The software failed to account for the Affordable Care Act's tax penalty for uninsured individuals, resulting in a myriad of incorrect returns. The burden was then thrust upon the taxpayers to file amended returns, if they were even aware they were affected by the miscalculations. This is hardly the first time a major tax preparation software miscalculated returns.

American taxpayers, I ask you this: at what point does the headache of filing your own 1040, or the heartache of paying a CPA to prepare your return for you, outweigh the risks associated with automated tax preparation services? The answer ultimately turns on how complicated your tax life is, but it is a resounding "maybe." The National Society of Accountants surveyed the market and found that the average cost of preparing a 1040 without itemized deductions is $176 (up from $152 in 2014), while preparing a 1040 with itemized deductions and an accompanying state tax return runs $273 (up from $261 in 2014). Many taxpayers can file for free with a service like TurboTax or H&R Block if they make less than $64,000 per year (enjoy reading the terms of service to find additional state filing fees, the cost of unsupported forms, and more!). Taxpayers making less than $54,000, or who are 60 years of age or older, can take advantage of the VITA program, a volunteer tax preparation service funded by the IRS. Filing your own 1040: priceless.

When a return is miscalculated, it is up to the taxpayer to file an amended return, lest the IRS fix the return for you, penalize you, charge you interest on the outstanding balance, and keep future refunds to pay off the outstanding debt. I assume that for many people using software, the intention is to avoid the hassle of doing your own math and reading through IRS publications on a Friday night. Most software will let you amend your return online, but only for the current tax year. Any older debt will need to be taken care of manually or with the assistance of a preparer.

VITA may seem like a great option for anyone under its income limits. Taxpayers with children can often take advantage of refundable credits that VITA volunteers are very experienced with. However, the Treasury Inspector General reported that only 39% of returns filed by VITA volunteers in 2011 were accurate. Even more fun, the software the volunteers currently use enjoyed three data breaches in the 2016 filing season. While the IRS is one of the leading providers of welfare in the United States (feeling more generous some years than it ought to be), the low-income taxpayer may have more luck preparing their own returns.

Your friendly neighborhood CPA hopefully understands IRS publications, circulars, and revenue rulings better than the average tax software. Take this anecdotal story from CBS: one taxpayer paid TurboTax $111.90, was refunded a total of $3,491 in federal and state taxes, and came away with $3,379.10. Her friendly neighborhood CPA charged a hefty $400, secured $3,831 in federal and state refunds, and left her with $3,431. Again, not everyone is in the same tax position as this taxpayer, but the fact of the matter is that tax automation doesn't always provide a cheaper, more convenient solution than the alternative. Your CPA should be able to interpret doubtful areas of tax law much more effectively than an automated program.
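The arithmetic behind that anecdote is simple enough to spell out; the figures below are just the ones reported in the story, and the "winner" is whoever leaves the taxpayer with the larger refund net of preparation costs.

```python
# The arithmetic behind the CBS anecdote above: compare refunds net of the
# cost of preparation.

def net_refund(total_refund: float, prep_cost: float) -> float:
    """Refund actually kept after paying for preparation."""
    return total_refund - prep_cost

turbotax = net_refund(total_refund=3_491.00, prep_cost=111.90)
cpa = net_refund(total_refund=3_831.00, prep_cost=400.00)

print(f"TurboTax net: ${turbotax:,.2f}")         # $3,379.10
print(f"CPA net:      ${cpa:,.2f}")              # $3,431.00
print(f"CPA advantage: ${cpa - turbotax:,.2f}")  # $51.90
```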

Filing yourself is great . . . provided, of course, you don't trigger any audit-prone items on your return. You also get to enjoy a 57% accuracy rate at the IRS help center. Perhaps you enjoy reading the fabled IRS Publication 17, a 293-page treatise filled with Treasury-favored tax positions and out-of-date advice. However, if you're like many taxpayers in the United States, it might make sense to fill out a very simple 1040 with the standard deduction yourself. It's free, and as long as you don't take any outrageous tax positions, you may end up saving yourself the headache of dealing with an amended return from malfunctioning software.

My fellow taxpayers who read an entire post about tax preparation in November, I salute you. There is no simple answer when it comes to tax returns; however, in extremely complex legal realms like tax, automation isn't necessarily the most convenient option. I look forward to furrowing my brow with you all this April as we complete one of the most convoluted forms our government has to offer.


“Gaydar” Highlights the Need for Cognizant Facial Recognition Policy

Ellen Levish, MJLST Staffer

 

Recently, two Stanford researchers made a frightening claim: computers can use facial recognition algorithms to identify people as gay or straight.

 

One MJLST blog post tackled facial recognition issues back in 2012. Then, Rebecca Boxhorn posited that we shouldn't worry too much, because "it is easy to overstate the danger" of emerging technology. In the wake of the "gaydar," we should re-evaluate that position.

 

First, a little background. Facial recognition, like fingerprint recognition, relies on matching a subject to given standards. An algorithm measures points on a test face, compares them to a standard face, and determines whether the test is a close fit to the standard. The algorithm matches thousands of points on test pictures to reference points on standards. These test points include those you'd expect: nose width, eyebrow shape, intraocular distance. But the software also quantifies many "aspects of the face we don't have words for." In the case of the Stanford "gaydar," researchers modified existing facial recognition software and used dating profile pictures as their standards. They fed in test pictures, also from dating profiles, and waited.
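To make the "points and standards" description concrete, here is a stripped-down sketch of the general matching idea: reduce each face to a vector of measurements and label a test face with whichever reference it sits closest to. This is a generic nearest-match illustration with made-up numbers, not the Stanford researchers' actual model, which was far more sophisticated.

```python
# Minimal sketch of the matching idea described above: a face becomes a vector
# of numeric measurements (nose width, intraocular distance, and many measures
# "we don't have words for"), which is compared against reference vectors.
# Generic illustration only; not the Stanford study's code.
import math

def distance(a, b):
    """Euclidean distance between two facial feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(test_face, standards):
    """Label the test face with the closest standard (reference) face."""
    return min(standards, key=lambda label: distance(test_face, standards[label]))

# Hypothetical three-measurement vectors for illustration only.
standards = {
    "standard A": [0.42, 1.10, 0.33],
    "standard B": [0.55, 0.98, 0.29],
}
print(classify([0.44, 1.08, 0.31], standards))  # "standard A"
```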

 

Recognizing patterns in these measurements, the Stanford study's software determined whether a test face was more like a standard "gay" or "straight" face. The model was accurate up to 91 percent of the time. That is far better than chance, and far beyond human ability.

 

The Economist first broke the story on this study. As expected, it gained traction. Hyperbolic headlines littered tech blogs and magazines. And of course, when the dust settled, the “gaydar” scare wasn’t that straightforward. The “gaydar” algorithm was simple, the study was a draft posted online, and the results, though astounding, left a lot of room for both statistical and socio-political criticism. The researchers stated that their primary purpose in pursuing this inquiry was to “raise the alarm” about the dangers of facial recognition technology.

 

Facial recognition has become much more commonplace in recent years. Governments worldwide openly employ it for security purposes. Apple and Facebook both "recognize individuals in the videos you take" and the pictures you post online. Samsung allows smartphone users to unlock their devices with a selfie. The Walt Disney Company, too, has invested heavily in facial recognition technology, which it uses (among other things) to determine how much you'll laugh at movies. These current, commercial uses seem at worst benign and at best helpful. But the Stanford "gaydar" highlights the insidious, Orwellian nature of "function creep," which policymakers need to keep an eye on.

 

Function creep "is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions." It poses a major ethical problem for the use of facial recognition software. No doubt inspired developers will create new and enterprising means of analyzing people. No doubt most of those means will continue to be benign and commercial. But we must admit that classification based on appearance and/or affect is ripe for unintended consequences. The dystopian train of thought is easy to follow. It demands that we consider normative questions about facial recognition technology.

 

Who should be allowed to use facial recognition technologies? When are they allowed to use it? Under what conditions can users of facial technology store, share, and sell information?

 

The goal should be to keep facial recognition technology from doing harm. America has a disturbing dearth of regulation designed to protect citizens from ne’er-do-wells who have access to this technology. We should change that.

 

These normative questions can guide our future policy on the subject. At the very least, they should help us start thinking about cogent guidelines for the future use of facial recognition technology. The “gaydar” might not be cause for immediate alarm, but its implications are certainly worth a second thought. I’d recommend thinking on this sooner, rather than later.


Invisible Cryptography: Should Quantum Communications Be Subjected to Legal Restraint?

Jacob Weindling, MJLST Staffer

Sending secret messages across the world has traditionally meant risking interception or eavesdropping by unintended recipients. Letters sent on horseback, telegraphs sent over wires, and radio transmissions through the atmosphere were all theoretically capable of interception in transit between the sender and the receiver. The problem was particularly pronounced in World War II, when the Allies easily intercepted secret Axis transmissions and vice versa. To ensure secrecy, messages were consequently encoded, appearing as seemingly random jumbles of characters to unintended recipients.

Message encoding in World War II operated on two separate principles. For particularly sensitive messages, "one-time pads" were created using (theoretically) random values as starting points. This technique, while essentially "unbreakable" without access to a copy of the one-time pad, required both the sender and the recipient to hold identical copies of the pad. The second method used machines to transform plaintext messages into code. This method, famously employed by Nazi Germany's Enigma machine, substituted a complicated but non-random algorithm for true randomness, trading some security for convenience and reliability. While Enigma proved a sufficient safeguard against traditional pen-and-paper codebreakers, early computing machines proved adept at quickly defeating the encryption, as dramatically highlighted in "The Imitation Game," the recent film detailing Alan Turing's work on a codebreaking machine during World War II.
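The one-time pad principle is simple enough to show in a few lines. The sketch below is a minimal illustration of the idea described above, not any wartime system: each byte of the message is combined with random key material that both the sender and the recipient hold, and the same operation both encrypts and decrypts.

```python
# Minimal illustration of the one-time pad principle: the message is combined
# (XOR) with truly random key material of the same length. Without the pad the
# ciphertext looks like random noise; with it, decryption is the same XOR.
import secrets

def one_time_pad(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))      # both sender and receiver hold this
ciphertext = one_time_pad(message, pad)      # a seemingly random jumble
plaintext = one_time_pad(ciphertext, pad)    # XOR again with the same pad
print(plaintext == message)                  # True
```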

Perhaps unsurprisingly, cryptographic systems were added to the State Department's International Traffic in Arms Regulations ("ITAR") Munitions List shortly after World War II. Thus, even though the U.S. government could not keep its transmissions from being overheard by foreign adversaries, it categorized the tools, methods, and development of cryptographic systems as munitions and severely regulated their export to foreign entities. While the Department of State has since narrowed the scope of regulated cryptography to exclude civilian products, restrictions remain on specialized military applications. A key assumption of this regulatory regime is that sensitive diplomatic and military information will travel over channels open to anyone who happens to have access to them. And while many communications have moved from radio waves to fiber-optic cables, both remain vulnerable to surveillance over the air and online.

Last year, however, China took a major step toward a fundamentally different philosophy of secret communication. With the launch of the QUESS satellite, China hopes to enable quantum entanglement communication between two ground sites. The satellite would in principle transmit a photon to the ground while retaining a photon that is "entangled" with the released photon. Any changes to the photon on the satellite would thus be reflected in the photon on the ground, serving as a rudimentary method for transmitting binary information. This test comes on the heels of an experiment at Delft University of Technology in the Netherlands, which demonstrated the transmission of information between two electrons separated by a distance of 17 kilometers.

A unique feature of this mode of transmission is that information is not propagated from the sender to the receiver via radio waves, which can be intercepted, but rather via the principle of quantum entanglement. Any attempt to eavesdrop would theoretically be perfectly detectable, as the act of observing the photons being transmitted would potentially change their state and render the communication either unreadable or otherwise obviously tampered with. A system could therefore be developed to automatically cut off communications if disturbances are detected.
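The cut-off idea can be illustrated with a toy simulation: if an eavesdropper measures the photons in transit, a fraction of the sampled bits will disagree, and the link can be dropped automatically once the error rate crosses a threshold. The disturbance rate and threshold below are illustrative assumptions in the spirit of quantum key distribution schemes generally, not the protocol actually flown on the Chinese satellite.

```python
# Toy simulation of the tamper-detection idea described above: an
# intercept-and-measure attack disturbs a fraction of the transmitted bits,
# and the link is cut if the sampled error rate exceeds a threshold.
# Conceptual illustration only; rates and thresholds are assumptions.
import random

def transmit(bits, eavesdropper=False, disturbance=0.25):
    """Deliver bits; an intercept-and-measure attack flips ~25% of them."""
    received = []
    for b in bits:
        if eavesdropper and random.random() < disturbance:
            b ^= 1  # measurement disturbed the state
        received.append(b)
    return received

def link_is_clean(sent, received, threshold=0.05):
    """Compare the bits; abort if the error rate exceeds the threshold."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent) <= threshold

random.seed(0)
sent = [random.randint(0, 1) for _ in range(1000)]
print(link_is_clean(sent, transmit(sent)))                     # True: undisturbed
print(link_is_clean(sent, transmit(sent, eavesdropper=True)))  # False: cut the link
```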

Interestingly enough, the U.S. Patent and Trademark Office has granted a patent that describes a similar method for transmitting information via quantum entanglement. The invention, claimed by Zhiliang Yuan and Andrew James Shields on behalf of Toshiba Corporation, was filed with the PTO on September 8, 2006 and published August 7, 2012. This patent builds on prior art that envisioned quantum cryptography, much of which was quietly filed with the PTO during the preceding two decades. Nevertheless, neither Congress nor the Department of State has acted to incorporate any reference to quantum communications into law, perhaps reflecting an unwillingness to address emerging technology that sounds like science fiction, as with self-driving cars and cyberspace before it.

Despite Congress' history of lethargy in addressing new innovations and the State Department's regulatory silence on the matter, legislative action or regulation may yet be premature. China has claimed that the satellite successfully sent a "hack-proof" communication, but the results have not been studied by the scientific community. Furthermore, no public demonstration has been made of a practical, non-laboratory quantum entanglement communication product. Even if the technology were brought to market, any early application would likely have severely limited bandwidth by today's standards, more closely resembling the telegraph than a gigabit internet connection. But with organizations around the world exploring ground- and space-based experiments with quantum communications, the technology appears poised to exit science fiction and enter practical application. Within the next generation, the codebreaking arms race may become obsolete, and Congress will be faced with the need to address a new secret communication regime.


Artificial Wombs and the Abortion Debate

Henry Rymer, MJLST Staffer

In a study published in late April 2017, a group of scientists reported that they had created an "extra-uterine system" that assisted in the gestation, and eventual birth, of several fetal lambs. The device, which houses the fetus in a clear plastic bag, is filled with a synthetic amniotic fluid that flows in and out of the bag through a pump system. While inside this artificial womb, the fetus is attached by its umbilical cord to a machine outside the bag. That machine serves several purposes: providing nutrition to the fetus, giving the fetus necessary medication, supplying the fetus's blood with a blend of air, oxygen, and nitrogen, and removing carbon dioxide from the bloodstream. The scientists report that by housing premature lamb fetuses in this system, they were able to "maintain stable haemodynamics, have normal blood gas and oxygenation parameters, and maintain patency of the fetal circulation" within the fetuses. Additionally, the fetal lambs subject to this test demonstrated "normal somatic growth, lung maturation and brain growth and myelination." The scientists believe that the extra-uterine system will not be relegated only to animal use; they estimate that the device could support a premature human infant "for up to four weeks."

With the advent of this new piece of neonatal technology, and particularly given the implications that this invention (and others like it) could have for human fetal development, the artificial womb has the potential to completely shift the paradigm of how the abortion debate is framed. In particular, the impact this invention will have when combined with American jurisprudence will surely become a new point of contention between pro-abortion activists and their anti-abortion counterparts.

With the Supreme Court case of Planned Parenthood v. Casey, SCOTUS re-enshrined the thesis of Roe v. Wade: namely that women have the right to have an abortion prior to the viability of the fetus. Planned Parenthood of Southeastern Pa. v. Casey, 505 U.S. 833, 846. The Casey court also stated that states have the power to “restrict abortions after fetal viability, if the law contains exceptions for pregnancies which endanger the woman’s life or health” and that the “State has a legitimate interest from the outset of the pregnancy in protecting the health of the woman and the life of the fetus that may become a child.” Id.

The arguments that arise from the advent of an artificial womb, in conjunction with this case law, flow from the question of what a "viable" fetus will be once extra-uterine systems become more mainstream and sophisticated. If these machines develop to the point where they can take a fetus the moment after conception and sustain it for the entire gestation period, could abortion procedures be outlawed entirely? Will "viability" remain the measure by which a fetus is distinguished from a human, or will a new metric be invented to replace it? Additionally, will this be a problem for the courts to answer? The legislature? Or a combination of both? The invention of artificial wombs may seem a peripheral legal issue that will not have to be addressed for some time yet. However, there are many questions that will need to be answered as the technology improves, and the abortion debate will not remain untouched as humanity moves into the future.


Extending the Earth’s Life to Make It Off-World: Will Intellectual Property Law Allow Climate Change to Go Unchecked?

Daniel Green, MJLST Staffer

The National Aeronautics and Space Administration (NASA) recently discovered seven Earth-like planets. Three of these planets are even located at the specific distance from their star, TRAPPIST-1, needed to fall within the proposed "Goldilocks zone" thought necessary to sustain life. The discovery has revived the conversation about whether a great migration for humanity is in order, of the kind seen in movies of the last ten years such as Passengers, The Martian, Interstellar, and even Wall-E. Even Elon Musk and Stephen Hawking have said that the human race needs to leave Earth before the next extinction-level event occurs. The possibility that these planets may be habitable presents some hope for a future on other planets.

Sadly, these planets are forty light years away (or 235 trillion miles). Although relatively near to Earth in astronomical terms, that distance means there is no possibility of reaching such a planet in a reasonable time with present technology, despite the fact that NASA is increasing funding and creating institutes devoted to such off-world possibilities. As such, humankind needs to look inward and extend the life of our own planet in order to survive long enough to even consider such an exodus.

Admittedly, humanity faces many obstacles in its quest to survive long enough to reach other planets. One of the largest and most dire is climate change. Specifically, the rise in the Earth's temperature needs to be kept within the bounds of the two-degree Celsius goal before 2100 C.E. Fortunately, technologies to combat this threat are well on their way to development. One of the most promising of these new technologies is solar climate engineering.

Solar climate engineering, also known as solar radiation management, is essentially a way to make the planet more reflective in order to block a portion of incoming sunlight and thereby counteract the increase in temperature caused by greenhouse gases. Though the technology is promising, Reynolds, Contreras, and Sarnoff predict in Solar Climate Engineering and Intellectual Property: Toward a Research Commons that it may be greatly hindered by intellectual property law.
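The physical intuition can be shown with the textbook zero-dimensional energy-balance model: the planet's equilibrium temperature falls as its reflectivity (albedo) rises. The calculation below is a simplified illustration of that principle, not a climate model, and ignores the greenhouse effect entirely.

```python
# The intuition behind solar radiation management, using the textbook
# zero-dimensional energy-balance model: effective temperature falls as
# planetary reflectivity (albedo) rises. Simplified illustration only.
SOLAR_CONSTANT = 1361.0   # incoming sunlight, W/m^2
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature(albedo: float) -> float:
    """Equilibrium temperature (K) where outgoing radiation balances absorbed sunlight."""
    absorbed = (1 - albedo) * SOLAR_CONSTANT / 4
    return (absorbed / SIGMA) ** 0.25

baseline = effective_temperature(0.30)   # roughly Earth's present reflectivity
brighter = effective_temperature(0.31)   # a one-point increase in reflectivity
print(f"{baseline:.1f} K -> {brighter:.1f} K "
      f"({baseline - brighter:.2f} K of cooling)")
```

Even a one-percentage-point increase in reflectivity yields nearly a degree of cooling in this crude model, which is why the approach attracts so much interest.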

Since solar climate engineering is a relatively new scientific advancement, it can be greatly improved by the sharing of ideas. Intellectual property law, however, runs directly contrary to this, raising the question of why anyone would want to hinder technology so vital to the Earth's survival. The answer lies in several dynamics, including the following three:

  • Patent "thickets" and the development of an "anti-commons": This problem occurs when too many items in the same technological field are patented, making existing patents extremely difficult to invent around. The result can be a halt in scientific advancement, since patented technologies cannot freely be built upon or improved.
  • Relationship to trade secrets: Private entities with financial interests in funding research may refuse to share advancements in order to protect the edge those advancements give them in the market.
  • Technological lock-in: Broad patents at the beginning of research may force others to rely on technologies within the scope of the patent when working on future research and development. Such patents may ingrain a certain technology into society even though a better alternative is available but goes unadopted.

There is no need to despair yet, though, since several steps can be taken to combat these barriers to the advancement of solar climate engineering and to promote communal technological advancement, such as:

  • State interventions: Governments can step in to ensure that intellectual property law does not hinder advancements needed for the good of humanity. They can do so through numerous actions, such as legislative and administrative measures, march-in rights, compulsory licensing, and control over funding.
  • Patent pools and pledges: Patent pools allow others to use one's patents in development under an agreement to split the proceeds. Patent pledges similarly limit a patent holder's enforcement rights through a promise in the form of a legally binding commitment. Though patent pools face more legal limitations, both approaches encourage the sharing of technology and further advancement.
  • Data commons: Government procurement and research funding can promote systematic data sharing in order to develop a broadly accessible repository as a commons. Such methods ideally promote rapid scientific advancement by broadening the use and accessibility of each advancement and discouraging patents.

Provided that intellectual property laws do not stand in the way, humanity may very well have taken its first steps toward extending its time to develop the technologies needed to, someday, live under the alien rays of TRAPPIST-1.


Social and Legal Concerns as America Expands Into the Brain-Computer Interface

Daniel Baum, MJLST Staffer

A great deal of science and technology has been emerging in the field of the brain-computer interface, the connection between the human brain and machines. In addition to enabling effective prosthetics and helping doctors repair brain damage, brain-computer interface technology recently allowed a man to operate a prosthetic hand and an electric wheelchair with his mind, using only a microelectrode array surgically implanted into his arm's nerve fibers. The professor who developed the implant also experimented on himself and made himself able to see in the dark: with an implant in the median nerve of his wrist, he could use the electric feedback from an ultrasonic range-finding sensor mounted on his hat to guide himself around a room blindfolded. Since this technology is still in its experimental stages, American law does not have much to say about human enhancements. Already, dangerous medical devices can lead to confusing and unfair trials, and it is easy to imagine courtrooms getting even more confusing and unfair as medical devices progress into the brain-computer interface. The technology is close enough that legal changes now could help it develop in ways that balance minimizing harm with utilizing its enormous potential to make people better.

Current laws impose no affirmative duty on manufacturers to allow pacemaker users access to their own data, and the top five manufacturers do not allow patients to access the data produced by their own pacemakers at all. As we begin to view machines as extensions of ourselves, in order to maintain our personal autonomy, we will need to be able to control who accesses the data we produce. This calls for an already necessary legal change: a right to access and control access to the data generated by objects that are effectively extensions of ourselves.

As this technology moves from healing disabled humans to giving normal people supernormal powers, its use will become much more widely pursued—"the disabled may prove more abled; we may all want their prostheses." If other job applicants are capable of so much more because of their built-in brain-computer interface technology, employers may discriminate against natural, unenhanced humans. People who cannot or choose not to install machinery in the brain-computer interface, whether for financial, medical, ethical, religious, or other reasons, could be protected by an independent statutory scheme aimed at eliminating discrimination both for and against individuals with brain-computer interface devices; such a scheme would not disturb the established disability protocols in the Americans with Disabilities Act and could be amended to account for each new form of machinery.

Another frightening concern arises once these enhancements become capable of connecting to the internet: if someone hacks into somebody else's machinery and makes that person damage something or someone, who will be criminally and civilly liable for the damage? Since American law does not have much to say about human enhancements, no defense has been defined for the person who was hacked and forced to cause harm. The person whose body actually committed the act could try pleading the affirmative defense of duress—that is, that the defendant was compelled to commit the crime against his or her will or judgment—but the U.S. Supreme Court held in 2014 in Rosemond v. United States that "circumstances that traditionally would support a necessity or duress defense" require proof that the defendant "could have walked away." The hacker took away the defendant's control of his or her own body, making it impossible for the defendant to have walked away. To solve this problem, states that recognize the insanity defense could amend their statutes to allow defendants who were mentally unable to control their own bodies due to hacking to plead the affirmative defense of insanity. States that conform to the Federal Rules of Criminal Procedure would then order the defendant to be mentally examined by an expert who could determine and tell the court to what extent the defendant was in control of his or her own mind and body at the time of the crime. The defendant could then implead the hacker to shift liability for the crime. However, since the insanity defense is a mental health defense and brain-computer interface devices are not necessarily related to mental health, states may want to define a new affirmative defense for being hacked that follows a similar procedure but better fits the situation and does not carry the stigma of mental disorder.

New machinery in the brain-computer interface is exciting and will allow us both to heal physical and mental damages and to develop supernormal powers. Legal changes now could help this emerging technology develop in ways that will balance minimizing harms like invasions of privacy, discrimination, and hacking with utilizing its enormous potential to make people better.


Courts Remain Unclear About Bitcoin's Status

Paul Gaus, MJLST Staffer

Bitcoin touts itself as an "innovative payment network and a new kind of money." Also known as "cryptocurrency," Bitcoin was hatched out of a paper posted online by a mysterious figure named Satoshi Nakamoto (who has never been identified). The Bitcoin economy is quite complex, but it is generally based on the principle that Bitcoins are released into the network at a steady pace determined by an algorithm.

Although once shrouded in ambiguity, Bitcoin threatens to upend (or "disrupt," in Silicon Valley speak) the payment industry. At their core, Bitcoins are just unique strings of information that users mine and typically store on their desktops. The list of companies that accept Bitcoin is growing and includes cable companies, professional sports teams, and even a fringe American political party. According to its proponents, Bitcoin offers lower transaction costs and increased privacy without the inflation that affects fiat currency.
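Two of the ideas above, mining and the algorithmically fixed pace of issuance, can be sketched in a few lines. The code below is a toy illustration, not the actual Bitcoin client: mining appears as a brute-force search for a hash below a difficulty target, and new coins enter circulation on a block subsidy schedule that halves every 210,000 blocks.

```python
# Toy illustration of the ideas above: "mining" is a brute-force search for a
# hash that meets a difficulty target, and new Bitcoins are released on a
# fixed schedule (the block subsidy halves every 210,000 blocks).
# Simplified sketch, not the real Bitcoin protocol implementation.
import hashlib

def mine(block_data: str, difficulty_prefix: str = "0000") -> int:
    """Find a nonce whose hash of (data + nonce) starts with the required zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

def block_subsidy(block_height: int) -> float:
    """New coins released per block: 50 BTC at launch, halving every 210,000 blocks."""
    return 50.0 / (2 ** (block_height // 210_000))

print(mine("example block"))    # a valid nonce for this toy difficulty
print(block_subsidy(0))         # 50.0
print(block_subsidy(420_000))   # 12.5
```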

Technologies like Bitcoins do not come without interesting legal implications. One of the oft-cited downsides of Bitcoins is that they can facilitate criminal enterprises. In such cases, courts must address what status Bitcoins have in the current economy. The Southern District of New York recently held that Bitcoins were unequivocally a form of currency for purposes of criminal prosecution. In United States v. Murgio et al., Judge Alison Nathan determined Bitcoins are money because “Bitcoins can be accepted as payment for goods and services or bought directly from an exchange with a bank account . . . and are used as a medium of exchange and a means of payment.” By contrast, the IRS classifies virtual currency as property.

Bitcoins are uncertain, volatile, and complex, but they continue to be accepted as currency and show no signs of fading away. Going forward, the judiciary will need to streamline its treatment of Bitcoins.


Recent Ninth Circuit Ruling an Important One for State and Local Governments Seeking to Regulate Genetically Modified Plants

Jody Ferris, Note & Comment Editor

Genetically modified plants (GMOs) are and have always been a hot topic in agriculture and food policy. Since they were first developed, groups have been lobbying at various levels of government to impose regulations on how they are grown or to have them banned outright. A noteworthy decision has come down for those following legal challenges to GMO regulation. In Alika Atay et al. v. County of Maui et al., the Ninth Circuit ruled that state and local governments may regulate the production of GMOs in their jurisdictions.

The original suit was filed by GMO proponents after the County of Maui enacted a ban on genetically modified crops. The court held that federal regulation of GMOs does not preempt state and local regulation after a variety is commercialized. This means that the United States Department of Agriculture holds jurisdiction over GMO varieties prior to commercialization, that is, during the development and testing period before a variety is sold on the market. According to the Ninth Circuit, after a variety is commercialized, state and local governments are free to enact regulations, including outright bans on GMO production, without the need to worry about federal preemption.

Interestingly, the county regulations at issue in the suit were nonetheless struck down, because the State of Hawaii already has a comprehensive regulatory scheme that the court held preempts county GMO regulations. This outcome disappointed local environmental and anti-GMO groups, which had supported the new county-level GMO restrictions. However, the decision helps clarify the respective regulatory responsibilities of individual counties and the State of Hawaii. Despite these groups' disappointment, the holding that there is no federal preemption of regulation of commercialized GMO varieties is an important one for many states in the Ninth Circuit, as counties in Washington and California, for example, have also enacted bans on GMO production.

This decision will likely encourage states wishing to enact their own regulations for how GMO varieties are grown and handled. It is also encouraging for individual counties that wish to enact GMO bans or county-level regulations, should state-level regulations not preempt them. It will certainly be interesting to follow how state and local governments structure future regulatory activities in light of this ruling.