
Microsoft Triumphs in Fight to Notify Users of Government Data Requests

Brandy Hough, MJLST Staffer

 

This week, Microsoft announced it will drop its secrecy order lawsuit against the U.S. government after the Deputy U.S. Attorney General issued a binding policy limiting the use and duration of protective orders issued pursuant to 18 U.S.C. § 2705(b) of the Stored Communications Act (“SCA”), which is Title II of the Electronic Communications Privacy Act of 1986 (“ECPA”).

 

The ECPA governs requests to obtain user records and information from electronic service providers. “Under the SCA, the government may compel the disclosure of . . . information via subpoena, a court order under 18 U.S.C. § 2703(d), or a search warrant.” Pursuant to 18 U.S.C. § 2705(b), a government entity may apply for an order preventing a provider from notifying its user of the existence of the warrant, subpoena, or court order. Such an order is to be granted only if “there is reason to believe” that such notification will result in (1) endangering an individual’s life or physical safety; (2) flight from prosecution; (3) destruction of or tampering with evidence; (4) intimidation of witnesses; or (5) seriously jeopardizing an investigation or delaying a trial.

 

Microsoft’s April 2016 lawsuit stemmed from what it viewed as routine overuse of protective orders accompanying government requests for user data under the ECPA, often without fixed end dates. Microsoft alleged both First and Fourth Amendment violations, arguing that “its customers have a right to know when the government obtains a warrant to read their emails, and . . . Microsoft has a right to tell them.” Many technology leaders, including Apple, Amazon, and Twitter, signed amicus briefs in support of Microsoft’s efforts.

 

The Deputy Attorney General’s October 19th memo states that “[e]ach §2705(b) order should have an appropriate factual basis and each order should extend only as long as necessary to satisfy the government’s interest.” It further outlines steps that prosecutors applying for §2705(b) orders must follow, including one that states “[b]arring exceptional circumstances, prosecutors filing § 2705(b) applications may only seek to delay notice for one year or less.” The guidelines apply prospectively to applications seeking protective orders filed on or after November 18, 2017.

 

Microsoft isn’t sitting back to celebrate its success; instead, it is continuing its efforts outside the courtroom, pushing for Congress to amend the ECPA to address secrecy orders.

 

Had the case progressed without these changes, the court should have ruled in favor of Microsoft. As written, § 2705(b) of the SCA allowed the government to exploit “vague legal standards . . . to get indefinite secrecy orders routinely, regardless of whether they were even based on the specifics of the investigation at hand.” This behavior violated both the First Amendment – by restraining Microsoft’s speech based on “purely subjective criteria” rather than requiring the government to “establish that the continuing restraint on speech is narrowly tailored to promote a compelling interest” – and the Fourth Amendment – by keeping users from knowing whether the government has searched and seized their cloud-based property, in contrast to the protections afforded to information stored in a person’s home or business. The court therefore should have declared, as Microsoft urged, that § 2705(b) was “unconstitutional on its face.”

 


“Gaydar” Highlights the Need for Cognizant Facial Recognition Policy

Ellen Levish, MJLST Staffer

 

Recently, two Stanford researchers made a frightening claim: computers can use facial recognition algorithms to identify people as gay or straight.

 

One MJLST blog post tackled facial recognition issues back in 2012. There, Rebecca Boxhorn posited that we shouldn’t worry too much, because “it is easy to overstate the danger” of emerging technology. In the wake of the “gaydar,” we should re-evaluate that position.

 

First, a little background. Facial recognition, like fingerprint recognition, relies on matching a subject to given standards. An algorithm measures points on a test face, compares them to a standard face, and determines whether the test is a close fit to the standard. The algorithm matches thousands of points on test pictures to reference points on standards. These test points include those you’d expect: nose width, eyebrow shape, interocular distance. But the software also quantifies many “aspects of the face we don’t have words for.” In the case of the Stanford “gaydar,” researchers modified existing facial recognition software and used dating profile pictures as their standards. They fed in test pictures, also from dating profiles, and waited.
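To see how this point-matching works in practice, consider a minimal sketch. It is purely illustrative and is not the Stanford software: the landmark names, coordinates, and threshold below are all invented.

```python
# A minimal sketch of landmark-based face comparison. Illustrative only;
# the landmarks, coordinates, and threshold are invented for this example.
import math

def landmark_vector(face):
    """Flatten a dict of (x, y) landmark points into one coordinate list."""
    return [coord for name in sorted(face) for coord in face[name]]

def face_distance(test_face, standard_face):
    """Euclidean distance between two faces' landmark vectors."""
    a, b = landmark_vector(test_face), landmark_vector(standard_face)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(test_face, standard_face, threshold=0.6):
    """Treat the test face as a match if it sits close to the standard."""
    return face_distance(test_face, standard_face) < threshold

# Two toy "faces," each described by three landmarks.
standard = {"left_eye": (0.30, 0.40), "right_eye": (0.70, 0.40), "nose": (0.50, 0.60)}
test = {"left_eye": (0.32, 0.41), "right_eye": (0.69, 0.38), "nose": (0.51, 0.62)}
print(matches(test, standard))  # True: the landmark vectors sit close together
```

Real systems measure thousands of such points rather than three, but the principle is the same: faces become vectors, and “recognition” becomes a distance calculation.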

 

Recognizing patterns in these measurements, the Stanford study’s software determined if a test face was more like a standard “gay” or “straight” face. The model was accurate up to 91 percent of the time. That is higher than just chance, and far beyond human ability.

 

The Economist first broke the story on this study. As expected, it gained traction. Hyperbolic headlines littered tech blogs and magazines. And of course, when the dust settled, the “gaydar” scare wasn’t that straightforward. The “gaydar” algorithm was simple, the study was a draft posted online, and the results, though astounding, left a lot of room for both statistical and socio-political criticism. The researchers stated that their primary purpose in pursuing this inquiry was to “raise the alarm” about the dangers of facial recognition technology.

 

Facial recognition has become much more commonplace in recent years. Governments worldwide openly employ it for security purposes. Apple and Facebook both “recognize individuals in the videos you take” and the pictures you post online. Samsung allows smartphone users to unlock their device with a selfie. The Walt Disney Company, too, owns a huge database of facial recognition technology, which it uses (among other things) to determine how much you’ll laugh at movies. These current, commercial uses seem at worst benign and at best helpful. But the Stanford “gaydar” highlights the insidious, Orwellian nature of “function creep,” which policy makers need to keep an eye on.

 

Function creep “is the phenomenon by which a technology designed for a limited purpose may gain additional, unanticipated purposes or functions.” It poses a major ethical problem for the use of facial recognition software. No doubt inspired developers will create new and enterprising means of analyzing people. No doubt most of these means will remain benign and commercial. But we must admit: classification based on appearance and/or affect is ripe for unintended consequences. The dystopian train of thought is easy to follow, and it demands that we consider normative questions about facial recognition technology.

 

Who should be allowed to use facial recognition technologies? When should they be allowed to use them? Under what conditions may users of facial recognition technology store, share, and sell the information they collect?

 

The goal should be to keep facial recognition technology from doing harm. America has a disturbing dearth of regulation designed to protect citizens from ne’er-do-wells who have access to this technology. We should change that.

 

These normative questions can guide our future policy on the subject. At the very least, they should help us start thinking about cogent guidelines for the future use of facial recognition technology. The “gaydar” might not be cause for immediate alarm, but its implications are certainly worth a second thought. I’d recommend thinking on this sooner, rather than later.


Act Fast! Get Access to Your Genetic Past, Present, and Future for One Low, Low Price

Hannah Mosby, MJLST Staffer

 

It’s Saturday morning, and you’re flipping through channels on your TV when you hear the familiar vocal inflections of an infomercial. For three monthly installments of $19.99, you can get access to your complete genetic ancestry, and any genetic predispositions that might impact your health—both now and in the future. From the comfort of your couch, you can order a kit, provide a DNA sample, and poof. . . a month or two later, you know everything you could ever want to know about your own genetic makeup. Sounds a little far-fetched, right?

 

Wrong. It’s 2017, and genetic testing kits are not only readily accessible to the public—they’re relatively inexpensive. Curious about whether you’re really German and Irish? Wondering if you—like your mother and her grandmother—might develop Alzheimer’s disease? Companies like 23andMe have you covered. The company advertises kits that cover both ancestry and certain health risks, and has recorded the sale of over 2 million testing kits. Maybe you’ve heard your friend, your coworker, or your sister talking about these genetic tests—or maybe they’ve already ordered their own kit.

 

What they’re probably not talking about, however, is the host of bioethical implications this sort of at-home genetic testing has. To some, ancestry may be cocktail party conversation, but to others, heritage is an enormous component of their personal identity. Purchasing a genetic testing kit may mean suddenly finding out that your ancestry isn’t what you thought it was, and consumers may or may not understand the emotional and psychological implications of these kinds of results. Genetic health risks present an even bigger ethical challenge—it’s all too easy to mistake the word “predisposition” for a diagnosis. Unless consumers are thoroughly educated about the implications of specific gene variants, companies like 23andMe aren’t providing useful health data—they’re providing enormously impactful information that the average consumer may not be equipped to understand or cope with.

 

It’s also easy to forget about the data privacy concerns. According to 23andMe’s commercial website, “23andMe gives you control over your genetic information. We want you to decide how your information is used and with whom it is shared.” That sounds nice—but is that “meaningful choice” masked in legalese? Existing federal regulation bars discriminatory use of genetic information by insurance companies and employers, but how does it affect other entities, if at all? Third-party access to this highly personal information is under-regulated, and it can’t be adequately safeguarded by “consent” without thoroughly explaining to consumers the potential implications of third-party disclosure.

 

It’s easy to get wrapped up in new and exciting biotechnology—especially when it’s publicly accessible. And we should be excited. . . accessibility and transparency in a field as intimidating as genetics are worth celebrating. Further, genetic testing brings with it a host of preventative health and personal benefits. However, it also raises some ethical and regulatory concerns, and it’s important to make sure our enthusiasm—as consumers, but also as entrepreneurs—for genetic technology doesn’t outpace the regulatory systems available to govern it.


Apple Faces Trademark Lawsuit Regarding Its iPhone X Animoji Feature

Kaylee Kruschke, MJLST Staffer

 

The Japanese company emonster k.k. sued Apple in the U.S. on Wednesday, Oct. 18, 2017, claiming that Apple infringed emonster’s Animoji trademark with the iPhone X Animoji feature.

 

But first, what even is an Animoji? According to a Time article, an Animoji is an animal emoji that you can control with your face and send to your friends. You simply pick an emoji, point the camera at your face, and speak. The Animoji captures your facial expression and voice. However, this technology has yet to reach consumers’ hands. Apple’s website says that the iPhone X, with the Animoji feature, will be available for preorder on Oct. 27 and for purchase on Nov. 3.

 

So why is Apple being sued over this? Well, it’s not the actual technology that’s at issue. It’s the name Animoji. emonster’s complaint states that Enrique Bonansea created the Animoji app in 2014. The app allowed users to customize moving text and images and send them in messages. The United States Patent and Trademark Office registered Animoji to Bonansea on March 31, 2015, and he later assigned the trademark to emonster in Aug. 2017, according to the complaint. Bonansea also claims that he received requests to sell the Animoji trademark from companies he believes were fronts for Apple. Those requests were denied, according to the complaint.

 

The complaint also provides information suggesting that Apple knew it was infringing emonster’s trademark in Animoji. The day before Apple announced the iPhone X and its Animoji feature, Apple filed a petition with the United States Patent and Trademark Office requesting that the office cancel the Animoji trademark because emonster, Inc. didn’t exist at the time of the trademark application. This was a simple mistake: the paperwork should have said emonster k.k. instead of emonster, Inc. emonster was unable to fix the error because the cancellation proceeding was already pending. To be safe, emonster applied again to register the Animoji trademark in Sept. 2017, this time under the correct name, emonster k.k.

 

Additionally, Apple knew about emonster’s app because it was available on the Apple App Store. Apple had even helped emonster remove apps that infringed emonster’s trademark, the complaint stated. Nevertheless, Apple went forward with using Animoji as the name for its new technology.

 

The complaint also alleges that emonster sent Apple a cease-and-desist letter, but Apple continued to use the name Animoji for its new technology. emonster requests that Apple be enjoined from using the name Animoji, and claims that it is entitled to recover Apple’s profits from using the name, any ascertainable damages emonster has suffered, and the costs emonster incurs from the suit.

 

It’s unclear what this means for Apple and the release of the iPhone X, which is in the very near future. At this time, Apple has yet to comment on the lawsuit.


Sex Offenders on Social Media?!

Young Choo, MJLST Staffer

 

A sex offender’s access to social media is a pressing problem nowadays, especially considering the vast number of dating apps people use to meet one another. Crimes committed through the use of dating apps (such as Tinder and Grindr), including rape, child sex grooming, and attempted murder, have increased seven-fold in just two years. Although sex offenders are required to register with the State, and individuals can get access to each state’s sex offender registry online, there are few laws and regulations designed to combat this specific situation, in which minors and other young adults can become victims of sex crimes. A new dating app called “Gatsby” was introduced to address this problem. When new users sign up for Gatsby, they’re put through a criminal background check, which includes sex offender registries.

Should sex offenders even be allowed to access social media? In the recent case Packingham v. North Carolina, the Supreme Court held that a North Carolina law preventing sex offenders from accessing commercial social networking websites was unconstitutional under the First Amendment’s Free Speech Clause. The Court emphasized that access to social media is vital to citizens’ exercise of their First Amendment rights. The North Carolina law was struck down mainly because it wasn’t “narrowly tailored to serve a significant governmental interest,” but the Court noted that its decision does not prevent a State from enacting more specific laws that address and ban certain activity by sex offenders on social media.

The new online dating app Gatsby cannot be the only solution. There are already an estimated 50 million Tinder users worldwide, and those users have no way of determining whether their matches may be sex offenders. New laws narrowly tailored to the situation, perhaps requiring dating apps to run background checks on users or to otherwise screen sex offenders off their platforms, might be necessary to reduce the growing number of crimes committed through dating apps.


Tribal Sovereign Immunity May Shield Pharmaceutical Patent Owner From PTAB Inter Partes Review

Brenden Hoffman, MJLST Staffer

 

The Eleventh Amendment to the United States Constitution provides for State Sovereign Immunity, stating: “The Judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by Citizens of another State, or by Citizens or Subjects of any Foreign State.” Earlier this year, the Patent Trial and Appeal Board dismissed three Inter Partes Review (“IPR”) proceedings against the University of Florida based on its claim of State Sovereign Immunity. See Covidien LP v. University of Florida Research Foundation Inc., Case Nos. IPR2016-01274, -01275, and -01276 (PTAB Jan. 25, 2017).

Early last month, the pharmaceutical company Allergan announced that it had transferred its patent rights for the blockbuster drug Restasis to the Saint Regis Mohawk Tribe. Restasis is Allergan’s second most profitable drug (Botox is the first), netting $336.4 million in the second quarter of 2017. Under the agreement, the tribe was paid $13.75 million initially and will receive $15 million in annual royalties for every year that the patents remain valid. Bob Bailey, Allergan’s Executive VP and Chief Legal Officer, indicated that the company was approached by the St. Regis tribe and believes that tribal sovereign immunity should shield the patents from pending IPRs, stating: “The Saint Regis Mohawk Tribe and its counsel approached Allergan with a sophisticated opportunity to strengthen the defense of our RESTASIS® intellectual property in the upcoming inter partes review proceedings before the Patent Trial and Appeal Board… Allergan evaluated this approach closely, with expert counsel in patent and sovereign immunity law. This included a thorough review of recent case law such as Covidien LP v. University of Florida Research Foundation Inc. and Neochord, Inc. v. University of Maryland, in which the PTAB dismissed IPR proceedings against the universities based upon their claims of sovereign immunity.”

IPRs are highly controversial. The United States Supreme Court recently granted cert. in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC to determine “whether inter partes review, an adversarial process used by the Patent and Trademark Office (PTO) to analyze the validity of existing patents, violates the Constitution by extinguishing private property rights through a non-Article III forum without a jury.” Until that issue is resolved, IPRs will continue to be fought by companies such as Allergan seeking to protect their patent rights. Over the past few years, hedge fund manager Kyle Bass made headlines as a “reverse troll” by filing IPRs against pharmaceutical companies while simultaneously shorting their stocks. Bailey has stated that “the IPR process has been a thorn in our side…We get a call from reverse trolls on a regular basis. Now we have an alternative.” The move has been well received by many critics of IPRs, including an October 9, 2017 post on ipwatchdog.com titled “Native Americans Set to Save the Patent System.” In addition, the St. Regis Mohawk tribe has indicated that these types of arrangements can help it generate much-needed capital for housing, education, healthcare, and welfare, without requiring the tribe to give up any land or money.

However, the arrangement between Allergan and the St. Regis Mohawk tribe has attracted strong criticism from others. Mylan Pharmaceuticals, a party in the IPR proceedings challenging multiple Allergan patents on Restasis, has called the transfer a “sham” and compared it to racketeering cases involving lending fraud. “Allergan Pulls a Fast One,” a post on the Science Translational Medicine blog, states that “‘The validity of your patents is subject to review, unless you pay off some Indian tribe’ does not seem like a good way to run an intellectual property system,” calling the arrangement a “slimy legal trick” and observing that “this deal smells.” The post’s author suggests that “legal loopholes” like this one make the whole pharmaceutical industry look bad and will force Congress to take action.

In fact, U.S. Senator Claire McCaskill, the top-ranking Democrat on the Homeland Security and Governmental Affairs Committee, has already written a letter to the Pharmaceutical Research and Manufacturers of America urging them to review “whether the recent actions Allergan has taken are consistent with the mission of your organization.” She believes that “This is one of the most brazen and absurd loopholes I’ve ever seen, and it should be illegal…PhRMA can and should play a role in telling its members that this action isn’t appropriate, and I hope they do that.” On October 5, 2017, McCaskill introduced a bill in the Senate “[t]o abrogate the sovereign immunity of Indian tribes as a defense in inter partes review of patents.”


Mechanical Curation: Spotify, Archillect, Algorithms, and AI

Jon Watkins, MJLST Staffer

 

A great deal of attention has been paid recently to artificial intelligence. This CGPGrey YouTube video is typical of much modern thought on the subject: the technology is incredibly exciting—until it threatens your job. That worry has led many, including the video above, to search for kinds of jobs that are unavoidably “human,” and thereby safe.

 

However, any feeling of safety that search provides may be illusory. AI programs like Emily Howell, which composes sheet music, and Botnik, which writes jokes and articles, are widespread at this point. What these programs produce is increasingly indistinguishable from human-created content—not to mention increasingly innovative. Take, as another example, Harold Cohen’s comment on his AARON drawing program: “[AARON] generates objects that hold their own more than adequately, in human terms, in any gathering of similar, but human-produced, objects. . . It constitutes an existence proof of the power of machines to do some of the things we had assumed required thought. . . and creativity, and self-awareness.”

 

Thinking about what these machines create brings up more questions than answers. At what point is a program independent from its creator? Is any given “AI” actually creating works by itself, or is the author of the AI creating works through a proxy? The answers to these questions are enormously important, and any satisfying answer must have both legal and technical components.

 

To make the scope of these questions more manageable, let’s limit ourselves to one specific subset of creative work, a subset absolutely filled with “AI” at the moment: curation. Curation is the process of sorting through masses of art, music, or writing for the content that might be worth something to you. Curators have likely been around as long as humans have been collecting things, but up until recently they’ve been human. In the digital era, most people carry a dozen curators in their pocket. From Spotify’s and Pandora’s predictions of the music you might like, to Archillect’s AI mood board, to Facebook’s “People You May Know,” content curation is huge.

 

First, the legal issues. Curated collections are eligible for copyright protection, as long as they exhibit some “minimal degree of creativity.” Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340, 345 (1991). However, as a recent monkey debacle clarified, only human authors are protected by copyright. This is implied by § 102 of the Copyright Act, which states in part that copyright protection subsists “in original works of authorship.” Works of authorship are created by authors, and authors are human. Therefore, at least legally, the author of the AI may be creating works through a proxy. However, as in the monkey case above, some courts may find there is no copyright-eligible author at all. If neither a monkey, nor a human who provides the monkey with creative tools, is an author, is a human who provides a computer with creative tools an author? Goldstein v. California, a 1973 Supreme Court case, has been interpreted as standing for the proposition that computer-generated work must include “significant input from an author or user” to be copyright eligible. Does that decision need to be updated for a different era of computers?

 

The answer to this question is where a technical discussion may be helpful, because the answer may involve a simple spectrum of independence.

 

On one end of the spectrum is algorithmic curation which is deeply connected to decisions made by the algorithm’s programmer. If a programmer at Spotify writes a program which recommends I listen to certain songs, because those songs are written by artists I have a history of listening to, the end result (the recommendation) is only separated by two or three steps from the programmer. The programmer creates a rigid set of rules, which the computer implements. This seems to be no less a human work of authorship than a book written on a typewriter. Just as a programmer is separated from the end result by the program, a writer may be separated from the end result by various machinery within the typewriter. The wishes of both the programmer and the writer are carried out fairly directly, and the end results are undoubtedly human works of authorship.
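A hypothetical sketch makes that rigidity concrete. This is not Spotify’s actual code; the function and data below are invented, and the point is only how directly a programmer’s fixed rule produces the recommendation.

```python
# A toy, rule-based recommender of the kind described above: every step
# is a fixed rule chosen by the programmer. All data here is invented.
listening_history = ["Artist A", "Artist B", "Artist A"]

catalog = [
    {"title": "Song 1", "artist": "Artist A"},
    {"title": "Song 2", "artist": "Artist C"},
    {"title": "Song 3", "artist": "Artist B"},
]

def recommend(history, songs):
    """Recommend songs whose artist already appears in the user's history."""
    favorite_artists = set(history)
    return [s["title"] for s in songs if s["artist"] in favorite_artists]

print(recommend(listening_history, catalog))  # ['Song 1', 'Song 3']
```

Every output here traces straight back to a rule the programmer chose, which is why the result still looks like a human work of authorship at one remove.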

 

More complex AI, however, is often more independent. Take, for example, Archillect, whose creator stated in an interview, “It’s not reflecting my taste anymore . . . I’d say 60 percent of the things [she posts] are not things that I would like and share.” The process involved in Archillect, as described in the same interview, is much more complex than the simple Spotify program outlined above: “Deploying a network of bots that crawl Tumblr, Flickr, 500px, and other image-heavy sites, Archillect hunts for keywords and metadata that she likes, and posts the most promising results. . . her whole method of curation is based on the relative popularity of her different posts.”
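Archillect’s actual implementation is not public, but a heavily simplified, hypothetical sketch of popularity-driven curation suggests why the output drifts away from any one person’s taste. The tags, weights, and feedback rule below are all invented for illustration.

```python
# A sketch of popularity-driven curation in the spirit of Archillect:
# candidates are scored by the learned weight of their tags, and a
# feedback loop shifts those weights toward whatever performs well.
# The crawling is omitted and all data here is invented.
tag_weights = {"monochrome": 3.0, "architecture": 2.0, "neon": 1.0}

def score(image_tags):
    """Score a candidate image by the current weight of its tags."""
    return sum(tag_weights.get(tag, 0.0) for tag in image_tags)

def update_weights(posted_tags, engagement):
    """Feedback loop: tags on well-received posts gain weight."""
    for tag in posted_tags:
        tag_weights[tag] = tag_weights.get(tag, 0.0) + 0.1 * engagement

candidates = [["neon", "portrait"], ["monochrome", "architecture"]]
best = max(candidates, key=score)    # pick the most promising candidate
update_weights(best, engagement=42)  # audience response reshapes future "taste"
```

Because the weights are rewritten by audience engagement rather than by the programmer, the system’s “taste” is a moving target its creator never specified.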

 

While its author undoubtedly influenced Archillect through various programming decisions (which sites to set up bots for, frequency of posts, broad themes), much of what Archillect does is what we would characterize as judgment calls if a human were doing the work. Deeply artistic questions like “does this fit into the theme I’m shooting for?” or “is this the type of content that will be well received by my target audience?” are being asked and answered solely by Archillect, and are answered, as seen above, differently from how Archillect’s creator would answer them.

Closer still to the “independent” end of the spectrum, even more complex attempts at machine curation exist. This set of programs includes some of Google’s experiments, which attempt to make a better curator by employing cutting-edge machine learning technology. The attempts come from the same company that recently used machine learning to create an AI that taught itself to walk with very little programmer interaction. If the same approaches to AI are shared between the experiments, Google’s attempts at creating a curation AI might result in software more independent (and possibly more worthy of the title of author) than any yet.


Health in the Fast Lane: FDA’s Effort to Streamline Digital Health Technology Approval

Alex Eschenroeder, MJLST Staffer

 

The U.S. Food and Drug Administration (FDA) is testing out a fast-track approval program to see if it can accommodate the pace of innovation in the technology industry and encourage more ventures into the digital health technology space. Dr. Scott Gottlieb, Commissioner of the FDA, announced the fast-track pilot program—officially named the “Pre-Cert for Software Pilot Program” (Program)—on July 27, 2017. Last week, the FDA announced the names of the nine companies it selected out of more than 100 applicants to participate in the Program. Companies that made it onto the participant list include tech giants such as Apple and Samsung, as well as Verily Life Sciences—a subsidiary of Alphabet, Inc. The FDA also listed smaller startups, indicating that it intends to learn from entities at various stages of development.

The idea that attracted applicants from across the technology industry to the Program is roughly analogous to the TSA Pre-Check Program. With TSA Pre-Check certification, travelers at airports get exclusive access to less intensive pre-boarding security procedures because they submitted to an official background check (among other requirements) well before their trip. Here, the FDA Program completes extensive vetting of participating technology companies well before they bring a specific digital health technology product to market. As Dr. Gottlieb explained in the July Program announcement, “Our new, voluntary pilot program will enable us to develop a tailored approach toward this technology by looking first at the . . . developer, rather than primarily at the product (as we currently do for traditional medical products).” If the FDA determines through its review that a company meets necessary quality standards, it can pre-certify the company. A pre-certified company would then need to submit less information to the FDA “than is currently required before marketing a new digital health tool.” The FDA even proposed the possibility of a pre-certified company skipping pre-market review for certain products, as long as the company immediately started collecting post-market data for the FDA to confirm safety and effectiveness.

While “digital health technology” does not have a simple definition, a recently announced Apple initiative illustrates what the term can mean and how the FDA Program could encourage its innovation. Specifically, Apple recently announced plans to undertake a Heart Study in collaboration with Stanford Medicine. Through this study, researchers will use “data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions like atrial fibrillation.” Positive research results could encourage Apple, which “wants the Watch to be able to detect common heart conditions such as atrial fibrillation,” to move further into FDA-regulated territory. Indeed, Apple has been working with the FDA, aside from the Program, to organize the Heart Study. This is a critical development, as Apple has to date intentionally limited Watch sensors to “fitness trackers and heart rate monitors” to avoid FDA regulation. If Apple receives pre-certification through the Program, it could issue updates to a sophisticated heart monitoring app or issue an entirely different diagnostic app with little or no FDA pre-market review. This dynamic would encourage Apple, and companies like it, to innovate in digital health technology and create increasingly sophisticated tools to protect consumer health.


Congress, Google Clash Over Sex-Trafficking Liability Law

Samuel Louwagie, MJLST Staffer

Should web companies be held liable when users engage in criminal sex trafficking on the platforms they provide? Members of both political parties in Congress are pushing to make the answer to that question yes, over the opposition of tech giants like Google.

The Communications Act was enacted in 1934. In 1996, as the Internet went live for the public, Congress added Section 230 to the act through the Communications Decency Act. That provision protected providers of web platforms from civil liability for content posted by users of those platforms. The act states that in order to “promote the continued development of the internet . . . No provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That protection, according to the ACLU, “defines Internet culture as we know it.”

Earlier this month, Congress debated an amendment to Section 230 called the Stop Enabling Sex Traffickers Act of 2017. The act would remove that protection from web platforms that knowingly allow sex trafficking to take place. The proposal comes after the First Circuit Court of Appeals held in March of 2016 that even though Backpage.com played a role in trafficking underage girls, section 230 protected it from liability. Sen. Rob Portman, a co-sponsor of the bill, wrote that it is Congress’ “responsibility to change this law” while “women and children have . . . their most basic rights stripped from them.” And even some tech companies, such as Oracle, have supported the bill.

Google, meanwhile, has resisted such emotional pleas. Its lobbyists have argued that Backpage.com could be criminally prosecuted, and that removing core protections from internet companies would damage the free nature of the web. Critics of Google’s position, such as New York Times columnist Nicholas Kristof, argue that the Stop Enabling Sex Traffickers Act was crafted “exceedingly narrowly to target those intentionally engaged in trafficking children.”

The bill has bipartisan support and appears to be gaining steam. The Internet Association, a trade group that includes Google and Facebook, expressed a willingness at a Congressional hearing to support “targeted amendments” to the Communications Decency Act. Whether Google likes it or not, platforms will eventually be at legal risk if they don’t police their content for sex trafficking.


In Doge We Trust*

Richard Yo, MJLST Staffer

Despite the closure of virtually all U.S.-based Bitcoin exchanges in 2013 due to Congressional review and the uncertainty with which U.S. banks viewed Bitcoin’s viability, the passion for cryptocurrencies has remained strong, especially among technologists and venture capitalists. That passion reached an all-time high in 2017, when one Bitcoin exchanged for 5,000 USD.** Not more than five years ago, Bitcoin exchanged for 13 USD. For all their adoring supporters, however, cryptocurrencies have yet to gain traction in mainstream commerce, for several reasons.

Cryptocurrencies, particularly Bitcoin, have been notoriously linked to dark web locales such as the now-defunct Silk Road. A current holder of Bitcoin, Litecoin, or Monero would be hard pressed to find a completely legal way to spend his coins or tokens without second-guessing himself. A few legitimate enterprises, such as Microsoft, will accept Bitcoin, but only with very strict limitations, effectively scrubbing it of its fiat-currency-like qualities.

The price of your token can take a volatile 50% downswing or 3000% upswing in a matter of days, if not hours. If you go to the store expecting to purchase twenty dollars’ worth of groceries, you want to be sure that the amount of groceries you had in mind at the beginning of your trip is approximately the amount of groceries you will be able to bring back home.

After the U.S. closures, cryptocurrency exchanges found havens in countries with strong technology bases. Hotbeds include China, Russia, Japan, and South Korea, among others. However, the global stage has recently added more uncertainty to the future of cryptocurrency. In April 2017, Japan officially recognized Bitcoin as a legal form of payment. Senators in Australia are attempting to do the same. China and Russia, meanwhile, are home to most Bitcoin miners (Bitcoin is “mined” in the sense that transactions are verified by third-party computers, the owners of which are rewarded for their work with Bitcoins of their own) thanks to low energy costs in those two nations, and yet both governments are highly suspicious of cryptocurrencies. China has recently banned the use of initial coin offerings (ICOs) to generate funds, and South Korea has followed suit. Governments are unsure how best to regulate, or desist from regulating, these exchanges and the companies that provide the tokens and coins. There’s also a legitimate question as to whether a cryptocurrency can be regulated at all, given the nimbleness of the technology.
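The mining parenthetical above is worth unpacking with a toy example. The sketch below is a drastically simplified, hypothetical illustration of proof-of-work mining, not Bitcoin’s actual protocol: real miners hash full block headers against a far harder difficulty target.

```python
# A toy illustration of proof-of-work "mining": the miner searches for a
# nonce whose hash meets a difficulty target. Illustrative only; real
# Bitcoin mining hashes block headers at vastly higher difficulty.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so the block's SHA-256 hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # the "winning" nonce is what earns the miner a reward
        nonce += 1

print(mine("alice pays bob 1 BTC"))  # first nonce satisfying the target
```

The brute-force search is why cheap electricity matters so much: finding a winning nonce is pure computational grind, and whoever can grind cheapest mines most profitably.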

On this issue, some of the most popular exchanges are sometimes referred to as “regulated.” In truth, this is usually not regulation in the sense that consumers would think a bank or other financial institution is regulated. Instead, a cryptocurrency exchange usually imposes regulations on itself to ensure stability for its client base. It requires several forms of identification and multi-factor authentication that rivals (and sometimes exceeds) the security provided by traditional banks. These corrections became necessary after the epic 2014 failure of the then-largest cryptocurrency exchange in the world, Mt. Gox.

Such self-adjustment, self-regulation, and stringency are revealing. In the days of the Clinton administration, when internet technology’s ascent was looming, the U.S. government adopted a framework for its regulation. That framework was unassuming and could possibly be pared to a single rule: we will regulate it when it needs regulating. It asked that the technology be left in the hands of those who understand it best and be allowed to flourish.

This seems to be the approach that most national governments are taking. They seem to be imposing restrictions only when deemed necessary, not banning cryptocurrencies outright.

For Bitcoin and other cryptocurrencies, the analogous technology may be the “blockchain” that underlies their structure, not the tokens or coins themselves. The blockchain is a digital distributed ledger that provides anonymity, uniformity, and public (or private) access, using complex algorithms to verify and authenticate information. When someone excitedly speaks about the possibilities of Bitcoin or another cryptocurrency, they are often describing the features of blockchain technology, not the coin.
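To make the ledger idea concrete, here is a minimal sketch of a hash-linked chain. It is illustrative only; real blockchains add distributed consensus, digital signatures, and much more, and every name below is invented.

```python
# A minimal sketch of a hash-linked ledger. Each block commits to its
# predecessor's hash, so tampering anywhere breaks the chain downstream.
import hashlib

def block_hash(index: int, prev_hash: str, data: str) -> str:
    return hashlib.sha256(f"{index}|{prev_hash}|{data}".encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    index = len(chain)
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis placeholder
    chain.append({"index": index, "prev_hash": prev_hash, "data": data,
                  "hash": block_hash(index, prev_hash, data)})

def verify(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(i, block["prev_hash"], block["data"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, "alice -> bob: 1")
append_block(ledger, "bob -> carol: 1")
print(verify(ledger))   # True
ledger[0]["data"] = "alice -> mallory: 1"
print(verify(ledger))   # False: the tampered block no longer hashes correctly
```

That tamper-evidence, rather than any particular coin, is the property people are usually describing when they praise cryptocurrency’s reliability.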

Blockchain technology has already proven itself in several fields of business and many others are hoping to utilize it to effectuate the efficient and reliable dissemination and integration of information. This could potentially have sweeping effects in areas such as medical record-keeping or title insurance. It’s too early to know and far too early to place restrictions. Ultimately, cryptocurrencies may be the canary that gets us to better things, not the pickaxe.

 

*Dogecoin is a cryptocurrency featuring the Shiba Inu breed of dog, originally created as a practical joke but having since retained its value; it is now used as a legitimate form of payment.

**The author holds, or has held, Bitcoin, Ether, Litecoin, Ripple, and Bitcoin Cash.