Artificial Intelligence

Mystery Medicine: How AI in Healthcare Is (or Isn’t) Different From Current Medicine

Jack Brooksbank, MJLST Staffer

Artificial Intelligence (AI) is a funny creature. When we say AI, we generally mean algorithms, such as neural networks, that are “trained” on some initial dataset. This dataset can be essentially anything, such as a library of tagged photographs or the set of rules to a board game. The computer is given a goal, such as “identify objects in the photos” or “win a game of chess.” It then systematically iterates some process, depending on which algorithm is used, and checks the result against the known results from the initial dataset. In the end, the AI finds some pattern, essentially through brute force, and then uses that pattern to accomplish its task on new, unknown inputs (by playing a new game of chess, for example).
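
To make that training loop concrete, here is a minimal sketch in Python. Both the dataset and the model are invented for illustration: a handful of labeled points (tagged 1 when their coordinates together exceed a threshold) and a perceptron, one of the simplest trainable algorithms. A real system would use far more data and a far more complex network, but the shape of the process, guess, check against the known answers, adjust, and repeat, is the same.

```python
# A minimal, illustrative "train, check, repeat" loop.
# The toy dataset and perceptron model are invented for this sketch.

# Tagged training data: inputs paired with known answers.
training_data = [
    ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0),
    ((0.7, 0.9), 1), ((0.1, 0.5), 0), ((0.8, 0.6), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(point):
    """Apply whatever pattern the model has found so far."""
    activation = weights[0] * point[0] + weights[1] * point[1] + bias
    return 1 if activation > 0 else 0

# Systematically iterate: guess, check against the known label, adjust.
for _ in range(100):
    for point, label in training_data:
        error = label - predict(point)
        weights[0] += learning_rate * error * point[0]
        weights[1] += learning_rate * error * point[1]
        bias += learning_rate * error

# Use the learned pattern on a new, unknown input.
print(predict((0.95, 0.85)))  # prints 1
```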

AI is capable of amazing feats. IBM’s Deep Blue famously defeated chess grandmaster Garry Kasparov back in 1997, and the technology has only gotten better since. Tesla, Uber, Alphabet, and other giants of the technology world rely on AI to develop self-driving cars. AI is used to pick stocks, predict risk for investors, spot fraud, and even decide whether to approve a credit card application.

But, because AI doesn’t really know what it is looking at, it can also make some incredible errors. One neural network AI trained to detect sheep in photographs instead noticed that sheep tend to congregate in grassy fields. It then applied the “sheep” tag to any photo of such a field, fluffy quadrupeds or no. And when shown a photo of sheep painted orange, it handily labeled them “flowers.” Another cutting-edge AI platform has, thanks to a quirk of the original dataset it was trained on, a known propensity to spot giraffes where none exist. And the internet is full of humorous examples of AI-generated weirdness, like one neural net that invented color names such as “snowbonk,” “stargoon,” and “testing.”

One area of immense potential for AI applications is healthcare. AIs are being investigated for applications including diagnosing diseases and aiding in drug discovery. Yet the use of AI raises challenging legal questions. The FDA has a statutory mandate to ensure that many healthcare items, such as drugs or medical devices, are safe. But the review mechanisms the agency uses to do so generally rely on knowing how the thing under review works. And patients who receive sub-standard care have legal recourse if they can show that their treatment fell below the appropriate standard of care. But AI is helpful precisely because we don’t know how it works: it develops its own patterns beyond what humans can spot. The opaque nature of AI could make effective regulatory oversight very challenging. After all, a patient misdiagnosed by a substandard AI may have no way of proving that the AI was flawed. How could they, when nobody knows how it actually works?

One possible regulatory scheme that could get around this issue is to have AI remain “supervised” by humans. In this model, AI could be used to sift through data and “flag” potential points of interest. A human reviewer would then see what drew the AI’s interest, and make the final decision independently. But while this would retain a higher degree of accountability in the process, it would not really be using the AI to its full potential. After all, part of the appeal of AI is that it could be used to spot things beyond what humans could see. And there would also be the danger that overworked healthcare workers would end up just rubber-stamping the computer’s decision, defeating the purpose of having human review.
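
As a rough sketch of what that supervision might look like in software, consider the hypothetical triage pipeline below. The model interface, threshold, and record fields are all assumptions made for illustration; the point is simply that the system records what drew the AI’s interest and leaves the final decision to a human.

```python
# A hedged sketch of an "AI flags, human decides" workflow.
# The model interface, threshold, and record fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Flag:
    patient_id: str
    model_score: float        # the AI's confidence, shown to the reviewer
    region_of_interest: str   # what drew the AI's attention
    final_decision: str = "pending"  # only ever filled in by a human

def triage(scans, model, threshold=0.7):
    """Return flags for human review; nothing returned here is a diagnosis."""
    flags = []
    for scan in scans:
        score, region = model(scan)  # hypothetical model call
        if score >= threshold:
            flags.append(Flag(scan["patient_id"], score, region))
    return flags

def record_review(flag, clinician_decision):
    """The clinician's independent judgment is what actually goes on record."""
    flag.final_decision = clinician_decision
    return flag

def dummy_model(scan):
    """Placeholder standing in for a real trained model."""
    return 0.9, "upper-left quadrant"

# Toy usage: the model flags scans, a clinician records the real decision.
for flag in triage([{"patient_id": "A-001"}, {"patient_id": "A-002"}], dummy_model):
    print(record_review(flag, "no further action"))
```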

Another way forward could be foreshadowed by a program the FDA is currently testing for software update approval. Under the pre-cert program, companies could get approval for the procedures they use to make updates. Then, as long as future updates are made using that process, the updates themselves would be subject to a greatly reduced approval burden. For AI, this could mean agencies promulgating standardized methods for creating an AI system (lists of approved algorithm types, systems for choosing the datasets an AI is trained on) and private actors then having to show only that their system was set up according to those methods.

And of course, another option would be to simply accept some added uncertainty. After all, uncertainty abounds in today’s healthcare system, despite our best efforts. For example, lithium is prescribed to treat bipolar disorder even though the medical community is uncertain how it works. Indeed, the mechanism of many drugs remains mysterious. We know that these drugs work, even if we don’t know how; perhaps applying the same standard to AI in medicine wouldn’t really be so different after all.


Artificial Intelligence as Inventors: Who or What Should Get the Patent?

Kelly Brandenburg, MJLST Staffer

Ever since the introduction of electronic computers, innovators across the world have focused on the development of artificial intelligence (“AI”), the goal being to enable machines to act like humans by making decisions and responding to situations. Generally considered to be the first artificial intelligence program, the Logic Theorist was designed in 1955 and enabled a machine to prove mathematical theorems. Since then, people have developed machines that have beaten humans in some of the most strategic and intuitive games, such as chess, Scrabble, Othello, Jeopardy, and Go.

As new innovations are developed, whether in AI or other areas of technology, patents are a common means for the inventors to protect their ideas. However, what happens when the AI technology advances to the point where the machines are making the innovations? Does the protection afforded to human inventions by Article I, Section 8 of the Constitution apply to new AI inventions? While this capability is still to be developed, the questions of patentability and patent ownership have been brought up previously, and will potentially need to be addressed by the United States Patent and Trademark Office (“USPTO”) in the future.

An initial question is whether the invention can even be patented. A variety of terms in the patent statutes, including “whoever,” “person,” and “inventor,” indicate that the inventor has to be a human in order to get a patent. Therefore, if an invention is developed by a non-human entity, the same patent protection may not be available. Assuming such inventions are patentable, however, the next question is who should have the ownership rights to the patent. Should the AI itself get the patent, or should it instead go to the owner of the machine, or maybe to the inventor/programmer of the AI program?

The main purpose of providing patents to inventors is to “promote the progress of science and useful arts” by allowing the inventors to exclusively benefit from their efforts; it is an incentive-based program. From the AI perspective, there would not be much benefit in providing the AI with the exclusive rights of a patent, assuming the AI does not desire the money, recognition, or any other benefit that might come with it. Its innovation is more likely to be due to the natural development of its programming over time, rather than the incentive of any reward it might get. However, since this technology is still being developed, maybe AI will learn to respond to incentives the way humans do, in which case giving it a patent could induce more innovative effort.

For owners, depending on how the owner uses and interacts with the AI, the ownership rights of a patent may or may not have their desired effect. If the owner intends to use the AI to invent something and exposes it to unique environments or equipment, then perhaps they deserve the exclusive rights to the AI’s patent. However, if the AI just happens to invent something with no direction or intent from the owner, it would not make much sense to reward the owner for exerting no effort.

Lastly, the patent could also go to the initial programmers of the AI. This would likely depend on how much effort was put into the development of the AI after its initial programming. If the owner puts in that effort, the owner might get the patent over the programmer; but if the AI just happens to invent something regardless of what the owner does, then the programmer could have rights to the patent. Again, if programmers would benefit from the AI’s inventions, that would incentivize them to further enhance their programs.

Since these specific capabilities are mostly hypothetical at this point, it is impossible to predict exactly how AI technology will advance, or how it will actually work, in the future. However, the technology is definitely changing and getting closer to making AI innovation a reality, and patent law will have to adapt to however it unfolds.


And Then AI Came for the Lawyers…?

Matt McCord, MJLST Staffer

Artificial intelligence’s potential to make many roles redundant has generated no small amount of policy and legal discussion and analysis. Any number of commentators have speculated on AI’s capacity to transform the economy far more substantially than the automation boom of the last half-century; one discussion on ABC’s Q&A described today’s technology development trends as “alinear,” as opposed to predictable developments like the car (a carriage with an engine) supplanting the carriage drawn by a horse.

Technological development has largely helped to streamline law practice and drive new sources of business and avenues for marketing. Yet, AI may be coming for lawyers’ jobs next. A New Zealand firm is working to develop AI augmentation for legal services. The firm, MinterEllisonRuddWatts, looks to be in the early stages of developing this system, having entered into a joint venture agreement to work on development pathways.

The firm claims that the AI would take the more mundane analytic tasks, such as contract analysis and document review, off lawyers’ workloads, but would only result in the labor force having to “reduce,” not be “eliminated.” Yet the development of law-competent AI may result in massive levels of workforce reduction and transformation: Mills & Reeve’s Paul Knight believes that its adoption will shutter many firms and vastly shrink the need for, in particular, junior lawyers.

Knight couches this prediction in sweetening language, stating that the tasks remaining for lawyers would be “more interesting,” leading to a more efficient, more fulfilled profession engaging in new specialties and roles. Adopting AI at the firm level has clear benefits for firms looking to maximize profit per employee: according to one study, current-form AI is more accurate than many human attorneys at spotting contract issues, and vastly more efficient, completing a 90-minute task in 30 seconds.

Knight, like many AI promoters, claims that the profession, and society at large, should embrace AI’s role in transforming labor force requirements, arguing that AI increases efficiency and work fulfillment by reducing human involvement in the more mundane tasks. These words will likely do little to assuage the nerves of younger, prospective market entrants and attorneys specializing in those “more mundane” areas, who may be wondering whether AI’s development will eliminate their role from the labor force.

AI’s mass deployment in the law is currently limited, due in part to high costs, experimental technology, and a narrow range of current applications. But machine learning, especially recursive learning and adaptation, may bring this development to the forefront of the field unpredictably, quickly, and possibly in the very near future.


An Automated Armageddon

Jacob Barnard, MJLST Staffer

In the 1970s, hundreds of millions of people starved to death, 65 million of them Americans. In the 1980s, world oil production peaked, and it was soon followed by the depletion of all available sources of lead, zinc, tin, gold, and silver in 1990. To make matters worse, all computers stopped working on January 1, 2000. Fortunately, we were all put out of our misery when the world ended on December 21, 2012.

But now, after all of that, we must face a new threat. This one comes in the form of (killer) robots. That is correct; now, in addition to immigrants and other countries, robots are stealing our jobs.

Of course, this is not an entirely new threat. The industrial revolution threatened farmers through advancements in agricultural productivity, as well as increasing worker productivity in general. Yet, as economist Walter Williams explains, this was never actually a problem. In the United States, farmers were 90% of the labor force in 1790, but this decreased to 41% in 1900 (and is down to under 3% currently). All this means, however, is that increases in productivity allowed individuals who would have otherwise been farmers to seek employment in other fields (no pun intended).

Say’s law, commonly misunderstood as “supply creates its own demand,” can be more correctly understood through the insight of W.H. Hutt: “All power to demand is derived from production and supply. . . . The process of supplying—i.e., the production and appropriate pricing of services or assets for replacement or growth—keeps the flow of demands flowing steadily or expanding.” As each person becomes more productive, therefore, they are able to demand more in return for their increased production, which allows others to maintain their employment as well.

Empirical studies on the current effects of automation support this view as well. A 2017 study by Gregory, Salomons, and Zierahn with the Mannheim Centre for European Economic Research found that routine-replacing technological change accounted for a net increase in labor demand of about 11.6 million jobs across 27 EU countries from 1999 to 2010 (in comparison to a total growth of 23 million jobs over the same period). In 2015, Graetz and Michaels, working with the Centre for Economic Performance, found that “the increased use of robots raised countries’ average growth rates by about 0.37 percentage points. We also find that robots increased both wages and total factor productivity. While robots had no significant effect on total hours worked, there is some evidence that they reduced the hours of both low-skilled and middle-skilled workers.”

This last point is what may create an actual problem. Automation is unlikely to eliminate employment as we know it, but it will likely require a shift away from low-skilled labor. Like the farmers of the 18th and 19th centuries, many low-skilled workers may find their specific jobs being eliminated in favor of more technical employment. If people are given incentive to avoid this shift, it may result in unnecessary hardship for low-skilled workers.

Predictably, some have advocated exactly such an incentive. A universal basic income, as suggested by Elon Musk and others fearing a robot takeover, would only give low-skilled workers greater incentive to avoid investing in their educations, slowing the growth in human capital that would maintain high levels of employment as automation becomes more prevalent.

A more reasonable policy recommendation would be to amend the tax code to reduce the disincentive to enter new fields of employment. Currently, education expenses for entering a new trade or business are not deductible. In addition, expenses incurred seeking employment in a field other than the employee’s current trade or business are not deductible, because the taxpayer is not “carrying on” that trade or business when the expense is incurred. Simply allowing these two deductions would make it easier for workers to adapt to the changing demands of an evolving economy.

Even if these changes are not enough and the Luddites are correct about robots stealing all of our jobs, there still would not be a problem because there will be plenty of lucrative work available as robot-smashers.


Mechanical Curation: Spotify, Archillect, Algorithms, and AI

Jon Watkins, MJLST Staffer

A great deal of attention has been paid recently to artificial intelligence. This CGPGrey YouTube video is typical of much modern thought on artificial intelligence. The technology is incredibly exciting, until it threatens your job. This train of thought has led many, including the video above, to search for kinds of jobs which are unavoidably “human,” and thereby safe.

However, any feeling of safety that line of thinking lends may be illusory. AI programs like Emily Howell, which composes sheet music, and Botnik, which writes jokes and articles, are widespread at this point. What these programs produce is increasingly indistinguishable from human-created content, not to mention increasingly innovative. Take, as another example, Harold Cohen’s comment on his AARON drawing program: “[AARON] generates objects that hold their own more than adequately, in human terms, in any gathering of similar, but human-produced, objects. . . It constitutes an existence proof of the power of machines to do some of the things we had assumed required thought. . . and creativity, and self-awareness.”

Thinking about what these machines create brings up more questions than answers. At what point is a program independent from its creator? Is any given “AI” actually creating works by itself, or is the author of the AI creating works through a proxy? The answers to these questions are enormously important, and any satisfying answer must have both legal and technical components.

To make the scope of these questions more manageable, let’s limit ourselves to one specific subset of creative work, a subset which is absolutely filled with “AI” at the moment: curation. Curation is the process of sorting through masses of art, music, or writing for the content that might be worth something to you. Curators have likely been around as long as humans have been collecting things, but up until recently they’ve been human. In the digital era, most people likely carry a dozen curators in their pocket. From Spotify and Pandora’s predictions of the music you might like, to Archillect’s AI mood board, to Facebook’s “People You May Know,” content curation is huge.

First, the legal issues. Curated collections are eligible for copyright protection, as long as they exhibit some “minimal degree of creativity.” Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340, 345 (1991). However, as a recent monkey debacle clarified, only human authors are protected by copyright. This is implied by § 102 of the Copyright Act, which states in part that copyright protection subsists “in original works of authorship.” Works of authorship are created by authors, and authors are human. Therefore, at least legally, the author of the AI may be creating works through a proxy. However, as in the monkey case above, some courts may find there is no copyright-eligible author at all. If neither a monkey nor a human who provides the monkey with creative tools is an author, is a human who provides a computer with creative tools an author? Goldstein v. California, a 1973 Supreme Court case, has been interpreted as standing for the proposition that computer-generated work must include “significant input from an author or user” to be copyright eligible. Does that decision need to be updated for a different era of computers?

The answer to this question is where a technical discussion may be helpful, because the answer may involve a simple spectrum of independence.

On one end of the spectrum is algorithmic curation which is deeply connected to decisions made by the algorithm’s programmer. If a programmer at Spotify writes a program which recommends I listen to certain songs, because those songs are written by artists I have a history of listening to, the end result (the recommendation) is only separated by two or three steps from the programmer. The programmer creates a rigid set of rules, which the computer implements. This seems to be no less a human work of authorship than a book written on a typewriter. Just as a programmer is separated from the end result by the program, a writer may be separated from the end result by various machinery within the typewriter. The wishes of both the programmer and the writer are carried out fairly directly, and the end results are undoubtedly human works of authorship.
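
A toy version of such a rule makes that short chain visible. The catalog, listening history, and field names below are invented for illustration; the point is only that every step is one the programmer wrote down in advance.

```python
# A toy, rule-based recommender: suggest catalog tracks by artists already
# in the listening history. All data and field names are invented.

listening_history = ["Artist A", "Artist A", "Artist B"]

catalog = [
    {"title": "Song 1", "artist": "Artist A"},
    {"title": "Song 2", "artist": "Artist B"},
    {"title": "Song 3", "artist": "Artist C"},
]

def recommend(history, catalog):
    """Every decision here traces directly back to the programmer."""
    favorite_artists = set(history)
    return [track for track in catalog if track["artist"] in favorite_artists]

print(recommend(listening_history, catalog))
# [{'title': 'Song 1', 'artist': 'Artist A'}, {'title': 'Song 2', 'artist': 'Artist B'}]
```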

More complex AI, however, is often more independent. Take, for example, Archillect, whose creator stated in an interview, “It’s not reflecting my taste anymore . . . I’d say 60 percent of the things [she posts] are not things that I would like and share.” The process involved in Archillect, as described in the same interview, is much more complex than the simple Spotify program outlined above: “Deploying a network of bots that crawl Tumblr, Flickr, 500px, and other image-heavy sites, Archillect hunts for keywords and metadata that she likes, and posts the most promising results. . . her whole method of curation is based on the relative popularity of her different posts.”
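
The mechanics that interview hints at can be sketched, very loosely, as a popularity feedback loop. Everything below (the tags, the starting weights, the engagement numbers) is invented; the sketch is meant only to show why such a system’s output can drift away from its creator’s own taste.

```python
# A loose, hypothetical sketch of popularity-driven curation: candidates are
# scored by the keywords attached to them, and keyword weights drift toward
# whatever has performed well rather than toward any human's taste.

keyword_weights = {"architecture": 1.0, "monochrome": 1.0, "texture": 1.0}

def score(image):
    """Rank a candidate by how well its tags have performed so far."""
    return sum(keyword_weights.get(tag, 0.0) for tag in image["tags"])

def pick_next_post(candidates):
    return max(candidates, key=score)

def learn_from_engagement(image, likes, average_likes):
    """Shift weights toward tags that outperform the running average."""
    for tag in image["tags"]:
        if tag in keyword_weights:
            keyword_weights[tag] *= likes / max(average_likes, 1)

candidates = [{"tags": ["architecture", "monochrome"]}, {"tags": ["texture"]}]
post = pick_next_post(candidates)          # picks the two-tag candidate
learn_from_engagement(post, likes=250, average_likes=100)
print(keyword_weights)  # "architecture" and "monochrome" now weigh more
```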

While its author undoubtedly influenced Archillect through various programming decisions (which sites to set up bots for, frequency of posts, broad themes), much of what Archillect does is what we would characterize as judgment calls if a human were doing the work. Deeply artistic questions like “does this fit into the theme I’m shooting for?” or “is this the type of content that will be well-received by my target audience?” are being asked and answered solely by Archillect, and are answered, as seen above, differently from how Archillect’s creator would answer them.

Even closer to the “independent” end of the spectrum, still more complex attempts at machine curation exist. This set of programs includes some of Google’s experiments, which attempt to make a better curator by employing cutting-edge machine learning technology. The attempt comes from the same company which recently used machine learning to create an AI that taught itself to walk with very little programmer interaction. If the same approaches to AI are shared between the experiments, Google’s attempts at creating a curation AI might result in software more independent (and possibly more worthy of the title of author) than any software yet.