Tax Software: Where Automation Falls Short

Kirk Johnson, MJLST Staffer

 

With the rise of automated legal technologies, we sometimes assume that any electronic automation is good. Unfortunately, that assumption does not hold up well in extremely complicated fields such as tax. This post will highlight the flaws in automated tax software and, hopefully, make the average taxpayer think twice before placing all of their faith in a program.

Last tax season, the Internal Revenue Service (“IRS”) awarded its Volunteer Income Tax Assistance (“VITA”) and Tax Counseling for the Elderly (“TCE”) contract to the tax software company TaxSlayer. For many low-income taxpayers using these services, TaxSlayer turned out to be a double-edged sword. The software failed to account for the Affordable Care Act’s tax penalty for uninsured individuals, resulting in a myriad of incorrect returns. The burden was then thrust upon the taxpayers to file amended returns, if they were even aware they were affected by the miscalculations. This is hardly the first time a major tax preparation software has miscalculated returns.

American taxpayers, I ask you this: at what point does the headache of filing your own 1040, or the heartache of paying a CPA to prepare your return for you, outweigh the risks associated with automated tax preparation services? The answer ultimately depends on the complexity of your tax life, but it is a resounding “maybe.” The National Society of Accountants surveyed the market and found that the average cost of preparing a 1040 without itemized deductions is $176 (up from $152 in 2014), while a 1040 with itemized deductions and an accompanying state tax return averages $273 (up from $261 in 2014). Many taxpayers making less than $64,000 per year can file for free with a service like TurboTax or H&R Block (enjoy reading the terms of service to find additional state filing fees, the cost of unsupported forms, and more!). Taxpayers making less than $54,000, or those 60 years of age or older, can take advantage of the VITA program, a volunteer tax preparation service funded by the IRS. Filing your own 1040: priceless.

When a return is miscalculated, it is up to you, the taxpayer, to file an amended return, lest the IRS fix your return for you, penalize you, charge you interest on the outstanding balance, and retain your future refunds to pay off the outstanding debt. I assume that many people using software intend to avoid the hassle of doing their own math and reading through IRS publications on a Friday night. Most software will let you amend your return online, but only for the current tax year. Any older debt will need to be taken care of manually or with the assistance of a preparer.

VITA may seem like a great option for anyone under its income limits. Taxpayers with children can often take advantage of refundable credits that VITA volunteers are very experienced with. However, the Treasury Inspector General reported that only 39% of returns filed by VITA volunteers in 2011 were accurate. Even more fun: the software the volunteers currently use suffered three data breaches in the 2016 filing season. While the IRS is one of the leading providers of welfare in the United States (feeling more generous some years than it ought to be), the low-income taxpayer may have more luck preparing their own returns.

Your friendly neighborhood CPA hopefully understands IRS publications, circulars, and revenue rulings better than the average tax software. Take this anecdote from CBS about one taxpayer: TurboTax cost her $111.90 and secured her a total of $3,491 in federal and state refunds, netting her $3,379.10. Her friendly neighborhood CPA charged a hefty $400 but secured $3,831 in federal and state refunds, netting her $3,431. Again, not everyone is in the same tax position as this taxpayer, but the fact of the matter is that tax automation does not always provide a cheaper, more convenient solution than the alternative. Your CPA should be able to interpret doubtful areas of tax law much more effectively than an automated program.
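
For the arithmetic-inclined, the netting in that anecdote works out as follows (a quick sketch; the dollar figures come from the CBS story, only the subtraction is mine):

```python
# Net outcome = refunds received minus preparation fee
turbotax_net = 3491.00 - 111.90  # reported figures
cpa_net = 3831.00 - 400.00

print(f"TurboTax net: ${turbotax_net:,.2f}")            # $3,379.10
print(f"CPA net:      ${cpa_net:,.2f}")                 # $3,431.00
print(f"CPA advantage: ${cpa_net - turbotax_net:.2f}")  # $51.90
```

In other words, the CPA’s larger refund more than covered her higher fee, but only by about fifty dollars.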

Filing yourself is great… provided, of course, you don’t trigger any audit-prone elements in IRS exams. You also get to enjoy a 57% accuracy rate at the IRS help center. Perhaps you enjoy reading the fabled IRS Publication 17, a 293-page treatise filled with Treasury-favored tax positions and out-of-date advice. However, if you’re like many taxpayers in the United States, it might make sense to fill out a very simple 1040 with the standard deduction yourself. It’s free, and as long as you don’t take any outrageous tax positions, you may end up saving yourself the headache of dealing with an amended return from malfunctioning software.

My fellow taxpayers who read an entire post about tax preparation in November, I salute you. There is no simple answer when it comes to tax returns; however, in extremely complex legal realms like tax, automation isn’t necessarily the most convenient option. I look forward to furrowing my brow with you all this April as we complete one of the most convoluted forms our government has to offer.


United States v. Microsoft Corp.: A Chance for SCOTUS to Address the Scope of the Stored Communications Act

Maya Digre, MJLST Staffer

 

On October 16, 2017, the United States Supreme Court granted the federal government’s petition for certiorari in United States v. Microsoft Corp. The case concerns a warrant issued to Microsoft ordering it to seize and produce the contents of a customer’s e-mail account that the government believed was being used in furtherance of narcotics trafficking. Microsoft produced the non-content information that was stored in the U.S., but moved to quash the warrant with respect to the information stored abroad in Ireland. Microsoft claimed that the only way to access the information was through the Dublin data center, even though that data center could also be accessed through its database management program located at some of its U.S. facilities.

 

The United States District Court for the Southern District of New York held Microsoft in civil contempt for not complying with the warrant. The Second Circuit reversed, stating that “Neither explicitly nor implicitly does the statute envision the application of its warrant provision overseas” and that “the application of the Act that the government proposes – interpreting ‘warrant’ to require a service provider to retrieve material from beyond the borders of the United States – would require us to disregard the presumption against extraterritoriality.” The court used traditional tools of statutory interpretation in the opinion, including plain meaning, the presumption against extraterritoriality, and legislative history.

 

The issue in the case, according to SCOTUSblog, is “whether a United States provider of email services must comply with a probable-cause-based warrant issued under 18 U.S.C. § 2703 by making disclosure in the United States of electronic communications within that provider’s control, even if the provider has decided to store that material abroad.” Essentially, the dispute centers on the scope of the Stored Communications Act (“SCA”) with respect to information that is stored abroad. The larger issue is the tension between international privacy laws and the absolute nature of warrants issued in the United States. According to the New York Times, “the case is part of a broader clash between the technology industry and the federal government in the digital age.”

 

I think the broader issue is one the Supreme Court should address. However, I am not certain that this is the best case for the Court. The fact that Microsoft can access the information from data centers in the United States with its database management program seems to weaken its claim. The case may be stronger for companies that cannot access information they store abroad from within the United States. Regardless of this weakness, the Supreme Court should rule in favor of the government to preserve the force of warrants of this nature. It was Microsoft’s choice to store the information abroad, and I don’t think the choices of companies should impede the legitimate crime-fighting goals of the government. Additionally, if the Court ruled that the warrant does not reach information stored abroad, it might incentivize companies to keep information out of the reach of a U.S. warrant by storing it abroad. That is not a favorable policy choice for the Supreme Court to make; the justices should rule in favor of the government.

 

Unfortunately, the Court will not get to rule on this case, as Microsoft decided to drop it following the DOJ’s agreement to change its policy.


Microsoft Triumphs in Fight to Notify Users of Government Data Requests

Brandy Hough, MJLST Staffer

 

This week, Microsoft announced it will drop its secrecy-order lawsuit against the U.S. government after the Deputy U.S. Attorney General issued a binding policy limiting the use and duration of protective orders issued pursuant to 18 U.S.C. § 2705(b), a provision of the Electronic Communications Privacy Act of 1986 (“ECPA”), also referred to as the Stored Communications Act (“SCA”).

 

The ECPA governs requests to obtain user records and information from electronic service providers. “Under the SCA, the government may compel the disclosure of . . . information via subpoena, a court order under 18 U.S.C. § 2703(d), or a search warrant.” Pursuant to 18 U.S.C. § 2705(b), a government entity may apply for an order preventing a provider from notifying its user of the existence of the warrant, subpoena, or court order. Such an order is to be granted only if “there is reason to believe” that such notification will result in (1) endangering an individual’s life or physical safety; (2) flight from prosecution; (3) destruction of or tampering with evidence; (4) intimidation of witnesses; or (5) seriously jeopardizing an investigation or delaying a trial.

 

Microsoft’s April 2016 lawsuit stemmed from what it viewed as routine overuse of protective orders accompanying government requests for user data under the ECPA, often without fixed end dates. Microsoft alleged both First and Fourth Amendment violations, arguing that “its customers have a right to know when the government obtains a warrant to read their emails, and . . . Microsoft has a right to tell them.” Many technology leaders, including Apple, Amazon, and Twitter, signed amicus briefs in support of Microsoft’s efforts.

 

The Deputy Attorney General’s October 19th memo states that “[e]ach §2705(b) order should have an appropriate factual basis and each order should extend only as long as necessary to satisfy the government’s interest.” It further outlines steps that prosecutors applying for §2705(b) orders must follow, including one that states “[b]arring exceptional circumstances, prosecutors filing § 2705(b) applications may only seek to delay notice for one year or less.” The guidelines apply prospectively to applications seeking protective orders filed on or after November 18, 2017.

 

Microsoft isn’t sitting back to celebrate its success; instead, it is continuing its efforts outside the courtroom, pushing for Congress to amend the ECPA to address secrecy orders.

 

Had the case progressed without these changes, the court should have ruled in favor of Microsoft. Because of the way § 2705(b) of the SCA was written, it allowed the government to exploit “vague legal standards . . . to get indefinite secrecy orders routinely, regardless of whether they were even based on the specifics of the investigation at hand.” This behavior violated both the First Amendment, by restraining Microsoft’s speech based on “purely subjective criteria” rather than requiring the government to “establish that the continuing restraint on speech is narrowly tailored to promote a compelling interest,” and the Fourth Amendment, by not allowing users to know when the government searches and seizes their cloud-based property, in contrast to the way Fourth Amendment rights are afforded to information stored in a person’s home or business. The court therefore should have declared, as Microsoft urged, that § 2705(b) was “unconstitutional on its face.”

 


“Gaydar” Highlights the Need for Cognizant Facial Recognition Policy

Ellen Levish, MJLST Staffer

 

Recently, two Stanford researchers made a frightening claim: computers can use facial recognition algorithms to identify people as gay or straight.

 

An MJLST blog post tackled facial recognition issues back in 2012. There, Rebecca Boxhorn posited that we shouldn’t worry too much, because “it is easy to overstate the danger” of emerging technology. In the wake of the “gaydar,” we should re-evaluate that position.

 

First, a little background. Facial recognition, like fingerprint recognition, relies on matching a subject to given standards. An algorithm measures points on a test face, compares them to a standard face, and determines whether the test is a close fit to the standard. The algorithm matches thousands of points on test pictures to reference points on standards. These test points include those you’d expect: nose width, eyebrow shape, interocular distance. But the software also quantifies many “aspects of the face we don’t have words for.” In the case of the Stanford “gaydar,” researchers modified existing facial recognition software and used dating profile pictures as their standards. They fed in test pictures, also from dating profiles, and waited.
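
To make those mechanics concrete, here is a minimal sketch of standard-versus-test matching, assuming a face has already been reduced to a handful of normalized measurements. The feature names, numbers, and labels are hypothetical; real systems measure thousands of points and use far more sophisticated models.

```python
import math

# Hypothetical measurements, normalized to [0, 1]:
# (nose width, eyebrow shape, interocular distance)
standards = {
    "standard_a": [0.42, 0.61, 0.55],
    "standard_b": [0.38, 0.70, 0.48],
}

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(test_face):
    """Label a test face with whichever standard it sits closest to."""
    return min(standards, key=lambda s: distance(standards[s], test_face))

print(classify([0.40, 0.64, 0.53]))  # the closest standard wins
```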

 

Recognizing patterns in these measurements, the Stanford study’s software determined whether a test face was more like a standard “gay” or “straight” face. The model was accurate up to 91 percent of the time. That is far better than chance, and far beyond human ability.

 

The Economist first broke the story on this study. As expected, it gained traction. Hyperbolic headlines littered tech blogs and magazines. And of course, when the dust settled, the “gaydar” scare wasn’t that straightforward. The “gaydar” algorithm was simple, the study was a draft posted online, and the results, though astounding, left a lot of room for both statistical and socio-political criticism. The researchers stated that their primary purpose in pursuing this inquiry was to “raise the alarm” about the dangers of facial recognition technology.

 

Facial recognition has become much more commonplace in recent years. Governments worldwide openly employ it for security purposes. Apple and Facebook both “recognize individuals in the videos you take” and the pictures you post online. Samsung allows smartphone users to unlock their device with a selfie. The Walt Disney Company, too, owns a huge database of facial recognition technology, which it uses (among other things) to determine how much you’ll laugh at movies. These current, commercial uses seem at worst benign and at best helpful. But the Stanford “gaydar” highlights the insidious, Orwellian nature of “function creep,” which policy makers need to keep an eye on.

 

Function creep “is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions.” And it poses a major ethical problem for the use of facial recognition software. No doubt inspired developers will create new and enterprising means of analyzing people. No doubt most of these means will continue to be benign and commercial. But we must admit: classification based on appearance and/or affect is ripe for unintended consequences. The dystopian train of thought is easy to follow. It demands that we consider normative questions about facial recognition technology.

 

Who should be allowed to use facial recognition technologies? When are they allowed to use it? Under what conditions can users of facial technology store, share, and sell information?

 

The goal should be to keep facial recognition technology from doing harm. America has a disturbing dearth of regulation designed to protect citizens from ne’er-do-wells who have access to this technology. We should change that.

 

These normative questions can guide our future policy on the subject. At the very least, they should help us start thinking about cogent guidelines for the future use of facial recognition technology. The “gaydar” might not be cause for immediate alarm, but its implications are certainly worth a second thought. I’d recommend thinking on this sooner, rather than later.


Act Fast! Get Access to Your Genetic Past, Present, and Future for One Low, Low Price

Hannah Mosby, MJLST Staffer

 

It’s Saturday morning, and you’re flipping through channels on your TV when you hear the familiar vocal inflections of an infomercial. For three monthly installments of $19.99, you can get access to your complete genetic ancestry, and any genetic predispositions that might impact your health—both now and in the future. From the comfort of your couch, you can order a kit, provide a DNA sample, and poof. . . a month or two later, you know everything you could ever want to know about your own genetic makeup. Sounds a little far-fetched, right?

 

Wrong. It’s 2017, and genetic testing kits are not only readily accessible to the public—they’re relatively inexpensive. Curious about whether you’re really German and Irish? Wondering if you—like your mother and her grandmother—might develop Alzheimer’s disease? Companies like 23andMe have you covered. The company advertises kits that cover both ancestry and certain health risks, and has recorded the sale of over 2 million testing kits. Maybe you’ve heard your friend, your coworker, or your sister talking about these genetic tests—or maybe they’ve already ordered their own kit.

 

What they’re probably not talking about, however, is the host of bioethical implications this sort of at-home genetic testing has. To some, ancestry may be cocktail party conversation, but to others, heritage is an enormous component of their personal identity. Purchasing a genetic testing kit may mean suddenly finding out that your ancestry isn’t what you thought it was, and consumers may or may not understand the emotional and psychological implications of these kinds of results. Genetic health risks present an even bigger ethical challenge—it’s all too easy to mistake the word “predisposition” for a diagnosis. Unless consumers are thoroughly educated about the implications of specific gene variants, companies like 23andMe aren’t providing useful health data—they’re providing enormously impactful information that the average consumer may not be equipped to understand or cope with.

 

It’s also easy to forget about the data privacy concerns. According to 23andMe’s commercial website, “23andMe gives you control over your genetic information. We want you to decide how your information is used and with whom it is shared.” That sounds nice, but is that “meaningful choice,” or a promise masked in legalese? Existing federal regulation bars discriminatory use of genetic information by insurance companies and employers, but how does it constrain other entities, if at all? Third-party access to this highly personal information is under-regulated, and it can’t be adequately safeguarded by “consent” without thoroughly explaining to consumers the potential implications of third-party disclosure.

 

It’s easy to get wrapped up in new and exciting biotechnology, especially when it’s publicly accessible. And we should be excited. . . accessibility and transparency in a field as intimidating as genetics are worth celebrating. Further, genetic testing brings with it a host of preventative health and personal benefits. However, it also raises some ethical and regulatory concerns, and it’s important to make sure our enthusiasm for genetic technology, as consumers but also as entrepreneurs, doesn’t outpace the regulatory systems available to govern it.


Apple Faces Trademark Lawsuit Regarding Its iPhone X Animoji Feature

Kaylee Kruschke, MJLST Staffer

 

The Japanese company emonster k.k. sued Apple in the U.S. on Wednesday, Oct. 18, 2017, claiming that Apple infringed emonster’s Animoji trademark with the iPhone X Animoji feature.

 

But first, what even is an Animoji? According to a Time article, an Animoji is an animal emoji that you can control with your face and send to your friends. You simply pick an emoji, point the camera at your face, and speak. The Animoji captures your facial expression and voice. However, this technology has yet to reach consumers’ hands. Apple’s website says that the iPhone X, with the Animoji feature, will be available for preorder on Oct. 27 and for purchase on Nov. 3.

 

So why is Apple being sued over this? Well, it’s not the actual technology that’s at issue; it’s the name Animoji. emonster’s complaint states that Enrique Bonansea created the Animoji app in 2014. This app allowed users to customize moving text and images and send them in messages. The United States Patent and Trademark Office registered Animoji to Bonansea on March 31, 2015, and he later assigned the trademark to emonster in Aug. 2017, according to the complaint. Bonansea also claims that he received requests from companies, which he believes were fronts for Apple, to sell the Animoji trademark. But those requests were denied, according to the complaint.

 

The complaint also provides information suggesting that Apple knew it was infringing emonster’s Animoji trademark. The day before Apple announced the iPhone X and its Animoji feature, Apple filed a petition with the United States Patent and Trademark Office requesting that the office cancel the Animoji trademark because emonster, Inc. didn’t exist at the time of the trademark application. This was a simple mistake: the paperwork should have said emonster k.k. instead of emonster, Inc., and emonster was unable to fix the error because the cancellation proceeding was already pending. To be safe, emonster applied again for registration of the Animoji trademark in Sept. 2017, this time under the correct name, emonster k.k.

 

Additionally, Apple knew about emonster’s app because it was available on the Apple App Store. Apple had even helped emonster remove apps that infringed emonster’s trademark, the complaint states. Nevertheless, Apple went forward with using Animoji as the name for its new technology.

 

The complaint also alleges that emonster sent Apple a cease-and-desist letter, but Apple continued to use the name Animoji for its new technology. emonster requests that Apple be enjoined from using the name Animoji, and claims that it is entitled to recover Apple’s profits from using the name, any ascertainable damages emonster has suffered, and the costs emonster incurs from the suit.

 

It’s unclear what this means for Apple and the release of the iPhone X, which is in the very near future. At this time, Apple has yet to comment on the lawsuit.


Sex Offenders on Social Media?!

Young Choo, MJLST Staffer

 

A sex offender’s access to social media is problematic nowadays, especially considering the vast number of dating apps people use to meet other users. Crimes committed through the use of dating apps (such as Tinder and Grindr), including rape, child sex grooming, and attempted murder, have increased seven-fold in just two years. Although sex offenders are required to register with the State, and individuals can get access to each state’s sex offender registry online, there are few laws and regulations designed to combat this specific situation, in which minors or other young adults can become victims of sex crimes. A new dating app called “Gatsby” was introduced to address this situation. When new users sign up for Gatsby, they’re put through a criminal background check, which includes sex offender registries.
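
A minimal sketch of the kind of sign-up gate Gatsby describes might look like the following. Everything here is an assumption for illustration: the record format, the name-plus-birthdate matching rule, and the local registry set standing in for the official state registries a real service would query.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Applicant:
    full_name: str
    date_of_birth: str  # "YYYY-MM-DD"

# Hypothetical stand-in for aggregated state registry records.
REGISTRY = {
    ("JOHN DOE", "1980-04-12"),
}

def passes_screening(applicant: Applicant) -> bool:
    """Reject sign-ups whose name and birth date match a registry record."""
    key = (applicant.full_name.strip().upper(), applicant.date_of_birth)
    return key not in REGISTRY

print(passes_screening(Applicant("Jane Roe", "1991-07-03")))  # True: allowed
```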

Should sex offenders even be allowed access to social media? In the recent case Packingham v. North Carolina, the Supreme Court decided that a North Carolina law preventing sex offenders from accessing commercial social networking websites was unconstitutional under the First Amendment’s Free Speech Clause. The Court emphasized that access to social media is vital to citizens’ exercise of their First Amendment rights. The North Carolina law was struck down mainly because it wasn’t “narrowly tailored to serve a significant governmental interest,” but the Court noted that its decision does not prevent a State from enacting more specific laws addressing and banning certain activities of sex offenders on social media.

The new online dating app Gatsby cannot be the only solution to the current situation. There are already an estimated 50 million people using Tinder worldwide, and those users have no method of determining whether their matches may be sex offenders. New laws narrowly tailored to address the situation, perhaps requiring dating apps to run background checks on users or to take alternative measures to keep sex offenders off their platforms, may be necessary to reduce the increasing number of crimes committed through dating apps.


Tribal Sovereign Immunity May Shield Pharmaceutical Patent Owner From PTAB Inter Partes Review

Brenden Hoffman, MJLST Staffer

 

The Eleventh Amendment to the United States Constitution provides for state sovereign immunity, stating: “The Judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by Citizens of another State, or by Citizens or Subjects of any Foreign State.” Earlier this year, the Patent Trial and Appeal Board (“PTAB”) dismissed three inter partes review (“IPR”) proceedings against the University of Florida based on its claim of state sovereign immunity. See Covidien LP v. University of Florida Research Foundation Inc., Case Nos. IPR2016-01274, -01275, and -01276 (PTAB January 25, 2017).

Early last month, the pharmaceutical company Allergan announced that it had transferred its patent rights for the blockbuster drug Restasis to the Saint Regis Mohawk Tribe. Restasis is Allergan’s second most profitable drug (Botox is the first), netting $336.4 million in the second quarter of 2017. Under the agreement, the tribe was paid $13.75 million initially and will receive $15 million in annual royalties for every year that the patents remain valid. Bob Bailey, Allergan’s Executive VP and Chief Legal Officer, indicated that Allergan was approached by the St. Regis tribe and believes that tribal sovereign immunity should shield the patents from pending IPRs, stating: “The Saint Regis Mohawk Tribe and its counsel approached Allergan with a sophisticated opportunity to strengthen the defense of our RESTASIS® intellectual property in the upcoming inter partes review proceedings before the Patent Trial and Appeal Board… Allergan evaluated this approach closely, with expert counsel in patent and sovereign immunity law. This included a thorough review of recent case law such as Covidien LP v. University of Florida Research Foundation Inc. and Neochord, Inc. v. University of Maryland, in which the PTAB dismissed IPR proceedings against the universities based upon their claims of sovereign immunity.”

IPRs are highly controversial. The United States Supreme Court recently granted cert. in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC to determine “whether inter partes review, an adversarial process used by the Patent and Trademark Office (PTO) to analyze the validity of existing patents, violates the Constitution by extinguishing private property rights through a non-Article III forum without a jury.” Until this issue is resolved, IPRs will continue to be resisted by companies such as Allergan seeking to protect their patent rights. Over the past few years, hedge fund manager Kyle Bass made headlines as a “reverse troll” by filing IPRs against pharmaceutical companies while simultaneously shorting their stocks. Bailey has stated that “the IPR process has been a thorn in our side…We get a call from reverse trolls on a regular basis. Now we have an alternative.” The move has been well regarded by many critics of IPRs, including an October 9, 2017 post on ipwatchdog.com titled “Native Americans Set to Save the Patent System.” In addition, the St. Regis Mohawk tribe has indicated that these types of arrangements can help the tribe generate much-needed capital for housing, education, healthcare, and welfare without requiring the tribe to give up any land or money.

However, this arrangement between Allergan and the St. Regis Mohawk tribe has attracted strong criticism from others. Mylan Pharmaceuticals, a party in the IPR proceedings challenging multiple Allergan patents on Restasis, has called the transfer a “sham” and compared it to racketeering cases involving lending fraud. “Allergan Pulls a Fast One,” a post on the Science Translational Medicine blog, states that “‘The validity of your patents is subject to review, unless you pay off some Indian tribe’ does not seem like a good way to run an intellectual property system,” that this is a “slimy legal trick,” and that “this deal smells.” The post’s author suggests that “legal loopholes” like this one make the whole pharmaceutical industry look bad and will force Congress to take action.

In fact, U.S. Senator Claire McCaskill, the top-ranking Democrat on the Homeland Security and Governmental Affairs Committee, has already written a letter to the Pharmaceutical Research and Manufacturers of America (“PhRMA”) urging the organization to review “whether the recent actions Allergan has taken are consistent with the mission of your organization.” She believes that “This is one of the most brazen and absurd loopholes I’ve ever seen, and it should be illegal…PhRMA can and should play a role in telling its members that this action isn’t appropriate, and I hope they do that.” On October 5, 2017, McCaskill introduced a bill in the Senate “To abrogate the sovereign immunity of Indian tribes as a defense in inter partes review of patents.”


Mechanical Curation: Spotify, Archillect, Algorithms, and AI

Jon Watkins, MJLST Staffer

 

A great deal of attention has been paid recently to artificial intelligence. This CGPGrey YouTube video is typical of much modern thought on artificial intelligence: the technology is incredibly exciting, until it threatens your job. This train of thought has led many, including the video above, to search for kinds of jobs that are unavoidably “human,” and thereby safe.

 

However, any feeling of safety that search lends may be illusory. AI programs like Emily Howell, which composes sheet music, and Botnik, which writes jokes and articles, are widespread at this point. What these programs produce is increasingly indistinguishable from human-created content, not to mention increasingly innovative. Take, as another example, Harold Cohen’s comment on his AARON drawing program: “[AARON] generates objects that hold their own more than adequately, in human terms, in any gathering of similar, but human-produced, objects. . . It constitutes an existence proof of the power of machines to do some of the things we had assumed required thought. . . and creativity, and self-awareness.”

 

Thinking about what these machines create brings up more questions than answers. At what point is a program independent from its creator? Is any given “AI” actually creating works by itself, or is the author of the AI creating works through a proxy? The answers to these questions are enormously important, and any satisfying answer must have both legal and technical components.

 

To make the scope of these questions more manageable, let’s limit ourselves to one specific subset of creative work, a subset that is absolutely filled with “AI” at the moment: curation. Curation is the process of sorting through masses of art, music, or writing for the content that might be worth something to you. Curators have likely been around as long as humans have been collecting things, but until recently they have been human. In the digital era, most people carry a dozen curators in their pocket. From Spotify’s and Pandora’s predictions of the music you might like, to Archillect’s AI mood board, to Facebook’s “People You May Know,” content curation is huge.

 

First, the legal issues. Curated collections are eligible for copyright protection as long as they exhibit some “minimal degree of creativity.” Feist v. Rural Telephone Co., 499 U.S. 340, 345 (1991). However, as a recent monkey debacle clarified, only human authors are protected by copyright. This is implied by § 102 of the Copyright Act, which states in part that copyright protection subsists “in original works of authorship.” Works of authorship are created by authors, and authors are human. Therefore, at least legally, the author of the AI may be creating works through a proxy. However, as in the monkey case above, some courts may find there is no copyright-eligible author at all. If neither a monkey nor a human who provides the monkey with creative tools is an author, is a human who provides a computer with creative tools an author? Goldstein v. California, a 1973 Supreme Court case, has been interpreted as standing for the proposition that computer-generated work must include “significant input from an author or user” to be copyright-eligible. Does that decision need to be updated for a different era of computers?

 

This question is where a technical discussion may be helpful, because the answer may involve a simple spectrum of independence.

 

On one end of the spectrum is algorithmic curation that is deeply connected to decisions made by the algorithm’s programmer. If a programmer at Spotify writes a program that recommends I listen to certain songs because those songs are written by artists I have a history of listening to, the end result (the recommendation) is only separated by two or three steps from the programmer. The programmer creates a rigid set of rules, which the computer implements, as in the sketch below. This seems to be no less a human work of authorship than a book written on a typewriter. Just as a programmer is separated from the end result by the program, a writer is separated from the end result by various machinery within the typewriter. The wishes of both the programmer and the writer are carried out fairly directly, and the end results are undoubtedly human works of authorship.
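
A toy version of that rigid, programmer-authored rule might look like this (an illustrative sketch, not Spotify’s actual system):

```python
# Rigid rule: recommend catalog songs by artists already in the
# user's listening history.
listening_history = [
    ("Purple Rain", "Prince"),
    ("Kiss", "Prince"),
    ("Jolene", "Dolly Parton"),
]

catalog = [
    ("1999", "Prince"),
    ("9 to 5", "Dolly Parton"),
    ("Take On Me", "a-ha"),
]

def recommend(history, catalog):
    """Return unheard catalog songs by artists the user has played."""
    known_artists = {artist for _, artist in history}
    heard = set(history)
    return [song for song in catalog
            if song[1] in known_artists and song not in heard]

print(recommend(listening_history, catalog))
# [('1999', 'Prince'), ('9 to 5', 'Dolly Parton')]
```

Every recommendation the program makes traces directly back to a rule its programmer wrote; the computer exercises no judgment of its own.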

 

More complex AI, however, is often more independent. Take, for example, Archillect, whose creator stated in an interview, “It’s not reflecting my taste anymore . . . I’d say 60 percent of the things [she posts] are not things that I would like and share.” The process involved in Archillect, as described in the same interview, is much more complex than the simple Spotify program outlined above: “Deploying a network of bots that crawl Tumblr, Flickr, 500px, and other image-heavy sites, Archillect hunts for keywords and metadata that she likes, and posts the most promising results. . .  her whole method of curation is based on the relative popularity of her different posts.”
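
The popularity-driven loop that interview describes could be sketched, very roughly, as follows. The tags, weights, and update rule are all assumptions for illustration, not Archillect’s code:

```python
# Keyword weights the curator adjusts as it learns what performs well.
weights = {"monochrome": 1.0, "architecture": 1.0, "neon": 1.0}

def score(image_tags):
    """Score a candidate image by the current weights of its tags."""
    return sum(weights.get(tag, 0.0) for tag in image_tags)

def record_engagement(image_tags, likes, baseline=100):
    """Nudge tag weights up or down based on how a post performed."""
    delta = 0.1 if likes > baseline else -0.1
    for tag in image_tags:
        if tag in weights:
            weights[tag] = max(0.0, weights[tag] + delta)

candidates = [["monochrome", "architecture"], ["neon", "portrait"]]
best = max(candidates, key=score)   # post the highest-scoring candidate
record_engagement(best, likes=250)  # popular post: reinforce its tags
```

Run long enough, the weights, and therefore the “taste,” are shaped by audience response rather than by the programmer’s preferences, which squares with the creator’s remark that Archillect no longer reflects his own taste.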

 

While its author undoubtedly influenced Archillect through various programming decisions (which sites to set up bots for, frequency of posts, broad themes), much of what Archillect does is what we would characterize as judgment calls if a human were doing the work. Deeply artistic questions like “does this fit into the theme I’m shooting for?” or “is this the type of content that will be well-received by my target audience?” are being asked and answered solely by Archillect, and are answered, as seen above, differently from how Archillect’s creator would answer them.

Closer still to the “independent” end of the spectrum, even more complex attempts at machine curation exist. This set of programs includes some of Google’s experiments, which attempt to make a better curator by employing cutting-edge machine learning technology. The attempt comes from the same company that recently used machine learning to create an AI that taught itself to walk with very little programmer interaction. If the same approaches to AI are shared between the experiments, Google’s attempts at creating a curation AI might result in software more independent (and possibly more worthy of the title of author) than any software yet.
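
For completeness, the far end of the spectrum might look like the sketch below: instead of hand-written rules or a fixed feedback formula, the curator learns its own scoring function from labeled examples. This is a deliberately tiny stand-in (a perceptron over tag features), not Google’s technology, and the vocabulary and examples are hypothetical.

```python
def featurize(tags, vocab):
    """Map an image's tags onto a fixed 0/1 feature vector."""
    return [1.0 if word in tags else 0.0 for word in vocab]

def train(examples, vocab, epochs=20, lr=0.5):
    """Perceptron training: examples are (tags, liked) pairs."""
    w = [0.0] * len(vocab)
    for _ in range(epochs):
        for tags, liked in examples:
            x = featurize(tags, vocab)
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            error = (1 if liked else 0) - predicted
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    return w

vocab = ["monochrome", "architecture", "neon", "portrait"]
examples = [(["monochrome", "architecture"], True),
            (["neon", "portrait"], False),
            (["monochrome", "portrait"], True)]

weights = train(examples, vocab)
print(weights)  # the learned "taste" lives entirely in these numbers
```

Whether those learned weights reflect “significant input from an author or user” is precisely the Goldstein question raised above.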


Health in the Fast Lane: FDA’s Effort to Streamline Digital Health Technology Approval

Alex Eschenroeder, MJLST Staffer

 

The U.S. Food and Drug Administration (FDA) is testing a fast-track approval program to see if it can accommodate the pace of innovation in the technology industry and encourage more ventures into the digital health technology space. Scott Gottlieb, M.D., Commissioner of the FDA, announced the fast-track pilot program, officially named the “Pre-Cert for Software Pilot Program” (Program), on July 27, 2017. Last week, the FDA announced the names of the nine companies it selected out of more than 100 applicants to participate in the Program. Companies that made it onto the participant list include tech giants such as Apple and Samsung, as well as Verily Life Sciences, a subsidiary of Alphabet, Inc. The FDA also listed smaller startups, indicating that it intends to learn from entities at various stages of development.

The FDA idea that attracted applicants from across the technology industry to the Program is roughly analogous to the TSA Pre-Check program. With TSA Pre-Check certification, travelers at airports get access to less intensive pre-boarding security procedures because they submitted to an official background check (among other requirements) well before their trip. Here, the FDA Program completes extensive vetting of participating technology companies well before they bring a specific digital health technology product to market. As Dr. Gottlieb explained in the July announcement of the Program, “Our new, voluntary pilot program will enable us to develop a tailored approach toward this technology by looking first at the . . . developer, rather than primarily at the product (as we currently do for traditional medical products).” If the FDA determines through its review that a company meets the necessary quality standards, it can pre-certify the company. A pre-certified company would then need to submit less information to the FDA “than is currently required before marketing a new digital health tool.” The FDA has even proposed the possibility of a pre-certified company skipping pre-market review for certain products, as long as the company immediately starts collecting post-market data for the FDA to confirm safety and effectiveness.

While “digital health technology” does not have a simple definition, a recently announced Apple initiative illustrates what the term can mean and how the FDA Program could encourage its innovation. Specifically, Apple recently announced plans to undertake a Heart Study in collaboration with Stanford Medicine. Through this study, researchers will use “data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions like atrial fibrillation.” Positive research results could encourage Apple, which “wants the Watch to be able to detect common heart conditions such as atrial fibrillation”, to move further into FDA regulated territory. Indeed, Apple has been working with the FDA, aside from the Program, to organize the Heart Study. This is a critical development, as Apple has intentionally limited Watch sensors to “fitness trackers and heart rate monitors” to avoid FDA regulation to date. If Apple receives pre-certification through the Program, it could issue updates to a sophisticated heart monitoring app or issue an entirely different diagnostic app with little or no FDA pre-market review. This dynamic would encourage Apple, and companies like it, to innovate in digital health technology and create increasingly sophisticated tools to protect consumer health.