Internet

Congress, Google Clash Over Sex-Trafficking Liability Law

Samuel Louwagie, MJLST Staffer

Should web companies be held liable when users engage in criminal sex trafficking on the platforms they provide? Members of both political parties in Congress are pushing to make the answer to that question yes, over the opposition of tech giants like Google.

The Communications Act was enacted in 1934. In 1996, as the Internet was taking off, Congress amended it through the Communications Decency Act, which included Section 230. That provision protected providers of web platforms from civil liability for content posted by users of those platforms. The act states that in order to “promote the continued development of the internet . . . No provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That protection, according to the ACLU, “defines Internet culture as we know it.”

Earlier this month, Congress debated an amendment to Section 230 called the Stop Enabling Sex Traffickers Act of 2017. The act would remove that protection from web platforms that knowingly allow sex trafficking to take place. The proposal comes after the First Circuit Court of Appeals held in March of 2016 that even though Backpage.com played a role in trafficking underage girls, section 230 protected it from liability. Sen. Rob Portman, a co-sponsor of the bill, wrote that it is Congress’ “responsibility to change this law” while “women and children have . . . their most basic rights stripped from them.” And even some tech companies, such as Oracle, have supported the bill.

Google, meanwhile, has resisted such emotional pleas. Its lobbyists have argued that Backpage.com could be criminally prosecuted, and that removing core protections from internet companies would damage the free nature of the web. Supporters of the bill, such as New York Times columnist Nicholas Kristof, counter that the Stop Enabling Sex Traffickers Act was crafted “exceedingly narrowly to target those intentionally engaged in trafficking children.”

The bill has bipartisan support and appears to be gaining steam. The Internet Association, a trade group including Google and Facebook, expressed a willingness at a Congressional hearing to support “targeted amendments” to the Communications Decency Act. Whether Google likes it or not, platforms will eventually be at legal risk if they don’t police their content for sex trafficking.


In Doge We Trust*

Richard Yo, MJLST Staffer

Despite the closure of virtually all U.S.-based Bitcoin exchanges in 2013 due to Congressional review and the uncertainty with which U.S. banks viewed its viability, the passion for cryptocurrencies has remained strong, especially among technologists and venture capitalists. This passion reached an all-time high in 2017 when one Bitcoin exchanged for 5,000 USD.** Just five years earlier, Bitcoin exchanged for 13 USD. For all their adoring supporters, however, cryptocurrencies have yet to gain traction in mainstream commerce, for several reasons.

Cryptocurrencies, particularly Bitcoin, have been notoriously linked to dark web locales such as the now-defunct Silk Road. A current holder of Bitcoin, Litecoin, or Monero would be hard-pressed to find a completely legal way to spend his coins or tokens without second-guessing himself. A few legitimate enterprises, such as Microsoft, will accept Bitcoin, but only with very strict limitations, effectively scrubbing it of its fiat currency-like qualities.

Volatility is another obstacle. The price of a token can take a 50% downswing or a 3,000% upswing in a matter of days, if not hours. If you go to the store expecting to purchase twenty dollars’ worth of groceries, you want to be sure that the amount of groceries you had in mind at the beginning of your trip is approximately the amount you will be able to bring back home.

After the U.S. closures, cryptocurrency exchanges found havens in countries with strong technology bases. Hotbeds include China, Russia, Japan, and South Korea, among others. However, the global stage has recently added more uncertainty to the future of cryptocurrency. In 2017, Japan recognized Bitcoin as an official form of payment. Senators in Australia are attempting to do the same. China and Russia, meanwhile, are home to most Bitcoin miners (Bitcoin is “mined” in the sense that transactions are verified by third-party computers, the owners of which are rewarded for their mining with Bitcoins of their own) due to low energy costs in those two nations, yet both are highly suspicious of cryptocurrencies. China has recently banned the use of initial coin offerings (ICOs) to generate funds, and South Korea has followed suit. Governments are unsure of how best to regulate, or desist from regulating, these exchanges and the companies that provide the tokens and coins. There is also a legitimate question as to whether a cryptocurrency can be regulated at all, given the nimbleness of the technology.

On this issue, some of the most popular exchanges are sometimes referred to as “regulated.” In truth, this is usually not regulation in the way consumers would think a bank or other financial institution is regulated. Instead, the cryptocurrency exchange usually imposes regulations on itself to ensure stability for its client base: it requires several forms of identification and multi-factor authentication that rivals (and sometimes exceeds) the security provided by traditional banks. These corrections became necessary after the epic 2014 failure of Mt. Gox, then the largest cryptocurrency exchange in the world.

Such self-adjustment, self-regulation, and stringency are revealing. In the days of the Clinton administration, when internet technology’s ascent was looming, the U.S. government adopted a framework for its regulation. That framework was unassuming and could be pared down to a single rule: we will regulate it when it needs regulating. It asked that the technology be left in the hands of those who understood it best and be allowed to flourish.

This seems to be the approach that most national governments are taking. They seem to be imposing restrictions only when deemed necessary, not banning cryptocurrencies outright.

For Bitcoin and other cryptocurrencies, the analogous technology may be the “blockchain” that underlies their structure, not the tokens or coins themselves. The blockchain is a digital distributed ledger that provides anonymity, uniformity, and public (or private) access, using complex algorithms to verify and authenticate information. When someone excitedly speaks about the possibilities of Bitcoin or another cryptocurrency, they are often describing the features of blockchain technology, not the coin.
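To make that description concrete, the heart of a blockchain can be sketched in a few lines: each block records a cryptographic hash of the block before it, so altering any past entry breaks the stored link that follows it. The block contents and two-party “transactions” below are invented for illustration, and real blockchains add distributed consensus, mining, and digital signatures on top, but a minimal Python sketch of the hash-chaining idea might look like this:

```python
import hashlib
import json

def hash_block(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy ledger: each block records the hash of its predecessor.
genesis = {"data": "genesis", "prev_hash": "0" * 64}
block1 = {"data": "Alice pays Bob 1 coin", "prev_hash": hash_block(genesis)}
block2 = {"data": "Bob pays Carol 0.5 coin", "prev_hash": hash_block(block1)}
chain = [genesis, block1, block2]

def chain_is_valid(chain):
    """Verify that every block points at the true hash of the block before it."""
    return all(chain[i + 1]["prev_hash"] == hash_block(chain[i])
               for i in range(len(chain) - 1))

print(chain_is_valid(chain))        # True
genesis["data"] = "forged history"  # tamper with an early record...
print(chain_is_valid(chain))        # False: the stored link no longer matches
```

Rewriting any past record silently invalidates the chain, which is why the ledger is so hard to forge once many parties hold copies of it.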

Blockchain technology has already proven itself in several fields of business and many others are hoping to utilize it to effectuate the efficient and reliable dissemination and integration of information. This could potentially have sweeping effects in areas such as medical record-keeping or title insurance. It’s too early to know and far too early to place restrictions. Ultimately, cryptocurrencies may be the canary that gets us to better things, not the pickaxe.

 

*Dogecoin is a cryptocurrency featuring the Shiba Inu dog meme. Originally created as a practical joke, it has since retained its value and is now used as a legitimate form of payment.

**The author holds, or has held, Bitcoin, Ether, Litecoin, Ripple, and Bitcoin Cash.


Say Goodbye to Net Neutrality: Why FCC Protection of the Open Internet Is Over

Kristin McGaver, MJLST Guest Blogger

[Editor’s Note: Ms. McGaver’s blog topic serves as a nice preview for two articles being published in this Spring’s Issue 18.2, one on the FCC generally by researchers Brent Skorup and Joe Kane, and one on the Open Internet Order more specifically by MJLST Staffer Paul Gaus.]

Net neutrality is a complex issue at the forefront of many current online regulation debates. In these debates, it is often unclear what the concept of “net neutrality” actually entails, what parties and actors it affects, and how many different approaches to its regulation exist. Nevertheless, Ajit Pai—newly appointed chairman of the United States Federal Communications Commission (“FCC”)—thinks “the issue is pretty simple.” Pai is openly opposed to net neutrality and has publicly signaled that, from his new position of power, he does not intend to enforce the FCC’s current regulations on the issue. This is troubling to many net neutrality supporters. Open Internet advocates are rightfully concerned that Pai will undo the recent advances in the protection of net neutrality achieved under former President Obama and embodied in the FCC’s 2015 “Protecting and Promoting the Open Internet” Regulation. With Pai at the FCC helm, net neutrality policy in the United States (“US”) is noticeably in flux. Thus, even though official policies protecting net neutrality exist on the books, the circumstances surrounding their enforcement and longevity leave much gray area to be explored, chiseled out, and set into stone.

Net neutrality is the idea that all Internet traffic should be treated equally. Yet, since 2003 when Tim Wu coined the term, scholars and commentators have been unable to agree on a standard definition, since that very definition is at the base of a multi-layered, over-arching debate. In the US, the most recent FCC articulation of net neutrality is defined by three principles—“no blocking, no throttling and no paid prioriti[z]ation.” These principles mean that ISPs should not be allowed to charge companies or websites higher rates for speedier connections or charge the user higher amounts for specific services. The new “bright-line” rules forbid ISPs from restricting access, tampering with Internet traffic, or favoring certain kinds of traffic via the use of “fast lanes.” Markedly, one thing the 2015 Regulation did not completely forbid is “zero-rating,” or “the practice of allowing customers to consume content from certain platforms without it counting towards their data plan cap”—a practice many see as violating net neutrality. Even with this and other exceptions, the 2015 Regulation did not pass without resistance: Republican Senator Ted Cruz of Texas tweeted that the 2015 Regulation was “Obamacare for the Internet.”

Additionally, net neutrality supporters and the FCC majority did not have long to bask in their success after the 2015 Regulation’s approval. The United States Telecom Association and Alamo Broadband quickly challenged it in a lawsuit. Because the new regulation re-classified ISPs as common carriers, subjecting them to the FCC’s authority, Telecom claimed that the FCC was overreaching, harming businesses, and impeding innovation in the field. Fortunately for the FCC, the United States Court of Appeals for the District of Columbia Circuit upheld the 2015 Regulation in a 2–1 decision.

Yet, the waves are far from settling for the FCC and net neutrality supporters in the US. Following the D.C. Circuit’s 2016 decision, American company AT&T and other members of the cable and telecom industry signaled an intent to continue the challenge, potentially all the way to the Supreme Court. More importantly, the lead dissenter to the 2015 Regulation is now chairman of the FCC. In his first few months as Chairman, Ajit Pai declined to comment on whether the FCC plans to enforce the 2015 Regulation. Pai’s “no comment” does not look promising for net neutrality or for those hoping the US will maintain its intent to protect the open Internet as was articulated in the 2015 Regulation.

Although the 2015 Regulation remains on the books, the likelihood that it is carefully enforced, or really enforced at all, is pretty low. This leaves a total lack of accountability for breaching ISPs. Achieving a policy that is not entirely spineless is admittedly complicated in the context of an Internet that is constantly evolving and a market that is increasingly dominated by just a few ISPs. But effective policies are not impossible, as evidenced by the success of the European Union and several of its member states in setting policies that protect and promote net neutrality. It is clear from these examples that effective net neutrality regulation in the online context requires setting, maintaining, and enforcing official articulations of policy. However, with a clear signal from the FCC chairman to back away from enforcing a set policy, it will be as if no regulation exists at all.


Why Is Equity-Based Crowdfunding Not Flourishing? A Comparison Between the US and the UK

Tianxiang Zhou, MJLST Editor

While donation-based crowdfunding (giving money to enterprises or organizations one wants to support) is flourishing on online platforms in the US, equity-based crowdfunding (funding startup enterprises or organizations in return for equity) under the JOBS Act is still struggling, as the requirements are proving impractical for most entrepreneurs.

Donation-based crowdfunding is dominating the major crowdfunding websites like Indiegogo, Kickstarter, etc. In March 2017, Facebook announced that it will introduce a crowdfunding feature to help users back causes such as education, medical needs, pet medical, crisis relief, personal emergencies, and funerals. However, this new crowdfunding feature has nothing to do with equity-based crowdfunding; it is only used for donation-based crowdfunding. As for the platforms specialized in crowdfunding, equity-based crowdfunding projects are difficult to find. If you visit Kickstarter or Indiegogo, most of the crowdfunding projects that appear on the webpages are donation-based crowdfunding projects. As of April 2, 2017, only four active equity crowdfunding opportunities appeared on the Indiegogo website for investors. The website stated that “more than 200 (equity-based) projects funded in the past.” (The author could not find an equity-based crowdfunding opportunity on Kickstarter, or any section for searching for one.)

The reason equity-based crowdfunding is not flourishing is easily apparent. As one article points out, the statutory requirements for crowdfunding under the JOBS Act “effectively weigh it down to the point of making the crowdfunding exemption utterly useless.” The problems with obtaining funding for small businesses that the JOBS Act aims to resolve are still there with crowdfunding: for example, the crowdfunding must be done through a registered broker-dealer, and the issuer has to file various disclosure statements, including financial statements and annual reports. For smaller businesses, the costs of preparing such reports could be heavily burdensome at their early stage.

Compared to the crowdfunding requirements in the US, the UK rules are much easier for issuers to comply with. The Financial Conduct Authority (FCA) introduced a set of regulations for the peer-to-peer sector in 2014; before this, the P2P sector did not fall under any regulatory regime. Since 2014, the UK government has required platforms to be licensed or to have regulated activities managed by authorized parties. If an investor is deemed a “non-sophisticated” investor, constraints are placed on how much they are permitted to invest: they must not invest more than 10% of their net investable assets in investments sold via what are called investment-based crowdfunding platforms. Though the rules govern how offers are communicated, the language and clarity used to describe them, and investor awareness of the associated risks, issuers face far fewer disclosure obligations, such as the filing of annual reports and financial statements.

As a result, the crowdfunding market in the UK is characterized “less by exchanges that resemble charity, gift giving, and retail, and more by those of financial market exchange” compared with the US. On the UK-based crowdfunding website Crowdcube, there were 14 open opportunities for investors as of April 2, 2017, and 494 projects had been funded. In comparison, the US-based crowdfunding giant Indiegogo’s statement that “more than 200 projects funded in the past” is not very impressive, considering the difference in size between the UK and US economies.

While entrepreneurs in the US face many obstacles to funding through equity-based crowdfunding, UK crowdfunding websites are now providing more equity-based opportunities to investors, sometimes even more effectively than government-led programs. The Crowd Data Center published a report stating that seed crowdfunding in the UK delivered 40% more funding in 2016 than the UK government-funded Startup Loans scheme.

As for the concern that equity-based crowdfunding involves too much fraud risk for “unsophisticated investors,” articles have pointed out that in countries like the UK and Australia, where lightly regulated equity crowdfunding platforms welcomed all investors, there are “hardly any instances of fraud.” While the JOBS Act’s equity-crowdfunding provisions have yet to prove their effectiveness, state laws are devising more options for issuers within the restrictions of SEC Rule 147 (see more from 1000 Days Late & $1 Million Short: The Rise and Rise of Intrastate Equity Crowdfunding). At the same time, the FCA has stated that it will also revisit its rules on crowdfunding. It will be interesting to see how the crowdfunding rules evolve in the future.


Should You Worry That ISPs Can Sell Your Browsing Data?

Joshua Wold, Article Editor

Congress recently voted, through the Congressional Review Act, to overturn the FCC’s October 2016 rules, Protecting the Privacy of Customers of Broadband and Other Telecommunications Services. As a result, those rules will likely never go into effect. Had they been implemented, the rules would have required Internet Service Providers (ISPs) to get customer permission before making certain uses of customer data.

Some commentators, looking at the scope of the rules relative to the internet ecosystem as a whole, and the fact that the rules hadn’t yet taken effect, thought that this probably wouldn’t have a huge impact on privacy. Orin Kerr suggested that the overruling of the privacy regulations was unlikely to change what ISPs would do with data, because other laws constrain them. Others, however, were less sanguine. The Verge quoted Jeff Chester of the Center for Digital Democracy as saying “For the foreseeable future, we’re going to be living in a commercial surveillance state.”

While the specific context of these privacy regulations is new (the FCC couldn’t regulate ISPs until 2015, when it defined them as telecommunications providers instead of information services), debates over privacy are not. In 2013, MJLST published Adam Thierer’s Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. In it, the author argues that privacy threats (as well as many other threats from technological advancement) are generally exaggerated. Thierer then lays out a four-part analytic framework for weighing regulation, calling on regulators and politicians to identify clear harms, engage in cost-benefit analysis, consider more permissive regulation, and then evaluate and measure the outcomes of their choices.

Given Minnesota’s response to Congress’s action, the debate over privacy and regulation of ISPs is unlikely to end soon. Other states may consider similar restrictions, or future political changes could lead to a swing back toward regulation. Or, the current movement toward less privacy regulation could continue. In any event, Thierer’s piece, and particularly his framework, may be useful to those wishing to evaluate regulatory policy as ISP regulation progresses.

For a different perspective on ISP regulation, see Paul Gaus’s student note, upcoming in Volume 19, Issue 1. That article will focus on presenting several arguments in favor of regulating ISPs’ privacy practices, and will be a thoughtful contribution to the discussion about privacy in today’s internet.


Confusion Continues After Spokeo

Paul Gaus, MJLST Staffer

Many observers hoped the Supreme Court’s decision in Spokeo v. Robins would clarify whether plaintiffs can establish Article III standing for claims based on future harm from data breaches. John Biglow explored the issue prior to the Supreme Court’s decision in his note It Stands to Reason: An Argument for Article III Standing Based on the Threat of Future Harm in Data Breach Litigation. Those optimistic that the Supreme Court would expand access for individuals seeking to litigate their privacy interests were disappointed.

Spokeo is a people search engine that generates publicly accessible online profiles of individuals (it has also been the subject of previous FTC data privacy enforcement actions). The plaintiff claimed Spokeo disseminated a false report on him, hampering his ability to find employment. Although the Ninth Circuit held the plaintiff suffered “concrete” and “particularized” harm, the Supreme Court disagreed, concluding that the Ninth Circuit’s analysis addressed only the particularization requirement. The Supreme Court remanded the matter to the Ninth Circuit, casting doubt on whether the plaintiff suffered concrete harm. Spokeo allegedly violated the Fair Credit Reporting Act, but the Supreme Court characterized the false report as a bare procedural harm, insufficient for Article III standing.

Already, the Circuits are split on how Spokeo impacts consumer data protection lawsuits. In Braitberg v. Charter Communications, the Eighth Circuit held that a cable company’s failure to destroy personally identifiable information of a former customer was a bare procedural harm akin to that in Spokeo. The Eighth Circuit reached this conclusion despite the defendant’s clear violation of the Cable Act. By contrast, in Church v. Accretive Health, the Eleventh Circuit held a plaintiff did have standing when she failed to receive disclosures of her default debt from her creditor under the Fair Debt Collections Practices Act.

Many observers consider Spokeo an adverse result for consumers seeking to litigate their privacy interests. By punting on the issue, the Supreme Court allowed the Circuits’ divergent application of Article III standing in class action privacy suits to continue.


Digital Tracking: Same Concept, Different Era

Meibo Chen, MJLST Staffer

The term “paper trail” grows more anachronistic by the day. While some people still prefer the traditional old-fashioned pen and paper, our modern world has endowed us with technologies like computers and smartphones. Whether we like it or not, this digital explosion is steadily consuming the lives of the average American (73% of US adults own a desktop or laptop computer, and 68% own a smartphone).

These new technologies have forced us to reconsider many novel legal issues arising from their integration into our daily lives. Recent Supreme Court decisions such as Riley v. California in 2014 recognized the immense data storage capacity of a modern cell phone and required a warrant for its search in the context of a criminal prosecution. In the civil context, many consumers are concerned with internet tracking. Indeed, MJLST published an article in 2012 addressing this issue.

We have grown accustomed to seeing “suggestions” that eerily match our respective interests. In fact, internet tracking technology has become far more sophisticated than traditional cookies, and can now utilize “fingerprinting” technology that looks at attributes like battery status or window size to identify a user’s presence or interests. This leads many to fear for their data privacy in digital settings. However, isn’t this digital tracking just the modern adaptation of the “physical” tracking we have grown so accustomed to?
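The mechanics of fingerprinting are straightforward to sketch: a site combines whatever attributes it can observe into one stable identifier, no cookie required. The attribute names and values below are hypothetical, and real fingerprinting scripts draw on many more signals, but a minimal Python sketch of the idea might look like this:

```python
import hashlib

def fingerprint(attributes):
    """Hash observable attributes into a compact, cookie-free identifier."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attributes a site might observe on separate visits.
visit1 = {"user_agent": "ExampleBrowser/1.0", "window_size": "1440x900",
          "timezone": "UTC-6", "fonts": "Arial,Times"}
visit2 = dict(visit1)                          # the same device, returning later
visit3 = dict(visit1, window_size="1280x800")  # a different device

print(fingerprint(visit1) == fingerprint(visit2))  # True: recognized without cookies
print(fingerprint(visit1) == fingerprint(visit3))  # False
```

Because the identifier is derived from the device itself rather than stored on it, clearing cookies does nothing to reset it, which is what makes the technique harder to evade than traditional tracking.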

When we physically go to a grocery store, don’t we subject ourselves to the prying eyes of those around us? Why should it be any different in cyberspace? While sometimes unnervingly accurate, “suggestions” or “recommended pages” based on one’s browsing history can actually benefit both the tracked and the tracker: the tracked gets more personalized search results, while the tracker uses that information to improve its business relationship with the consumer. Many browsers already sport an “incognito” function to limit tracking, offering a balance for when consumers want their privacy. Of course, tracking technology can be misused, but malicious use of beneficial technology is nothing new.


Faux News vs. Freedom of Speech?

Tyler Hartney, MJLST Staffer

This election season has produced a lot of jokes on social media. Some of the jokes are funny; others lack an obvious punch line. Multiple outlets are now reporting that fake news may have influenced voters in the 2016 presidential election. Both Facebook and Google have made conscious efforts to reduce the appearance of these fake news stories on their sites in an attempt to cut off the click bait, and thus the revenue streams, of these faux news outlets. With the expansion of technology and social media, these stories now circulate widely enough to spread misinformation on a massive scale. Is this like screaming “fire” in a crowded theatre? How biased would filtering this speech become? Facebook was blown to shreds by the media when it was found to have suppressed conservative news outlets, but as a private business it had every right to do so. Experts now say that the Russian government made efforts to help spread this fake news to help Donald Trump win the presidency.

First, the only entity that cannot place limits on speech is the state. If Facebook or Google chose to filter the news broadcast on each site, users still would not have a claim against the entity; this would be considered a private business choice. These faux news outlets circulate stories that have appeared, at times, intentionally and willfully misleading. Is this similar to a man shouting “fire” in a crowded theatre? In essence, the man in that commonly used hypothetical knows that his statement is false and that it has a high probability of inciting panic, but the general public will not know whether the statement is true and will have no time to check. The second part of that statement is key: the public would not have time to check the validity of the statement. If the government were to begin passing regulations and cracking down on the circulation and creation of these hoax news stories, it would have to prove that the stories create a “clear and present danger” of bringing about substantive evils that Congress has a right to prevent. This standard was created in the Supreme Court’s decision in Schenck v. United States. The government will not likely be able to ban these types of faux news stories because, while some may consider them dangerous, the audience has the capability of validating the content from these untrusted sources.

Even contemplating government action in this circumstance would require the state to walk a fine line with freedom of political expression. What is humorous, and what is dangerously misleading? For example, The Onion posted an article entitled “Biden Forges President’s Signature Executive Order 54723”; clearly this is a joke, yet it holds the potential to incite fury from those who might believe it and to create a misinformed public that might treat it as material information when casting a ballot. This Onion article is not notably different from a post entitled “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE” published by the Denver Guardian. With the same potential to mislead the public, there would not really be any identifiable difference between the two stories. This gray area would make it extremely difficult to methodically stop the production of fake news while ensuring the protection of comedic parody news. The only way to protect the public from the dangers of stories apparently being pushed on the American voting public by the Russian government in an attempt to influence election outcomes is to educate the public on how to verify online sources.


The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to increase in scope, the information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in her article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is reliability. To be admitted as evidence, the source of information must be authenticated so that a fact-finder may rely on the source, and ultimately its content, as trustworthy and accurate. However, social media sites are particularly susceptible to forgery, hacking, and alteration. Without a confession, it is often difficult to determine who actually authored the posted content.

Courts grapple with this issue – some allow social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4); other courts believe authentication is a relatively low bar and that, as long as the witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


Are News Aggregators Getting Their Fair Share of Fair Use?

Mickey Stevens, MJLST Note & Comment Editor

Fair use is an affirmative defense to copyright infringement that permits the use of copyrighted materials without the author’s permission when doing so fulfills copyright’s goal of promoting the progress of science and useful arts. One factor that courts analyze to determine whether fair use applies is the purpose and character of the use, including whether the use is of a commercial nature or for nonprofit educational purposes—in particular, whether the use is “transformative.” Recently, courts have had to determine whether automatic news aggregators can invoke the fair use defense against claims of copyright infringement. An automatic news aggregator scrapes the Internet and republishes pieces of original sources without adding commentary to the original works.

In Spring 2014, MJLST published “Associated Press v. Meltwater: Are Courts Being Fair to News Aggregators?” by Dylan J. Quinn. That article discussed the Meltwater case, in which the United States District Court for the Southern District of New York held that Meltwater—an automatic news aggregator—could not invoke the defense of fair use because its use of copyrighted works was not “transformative.” Meltwater argued that it should be treated like search engines, whose actions do constitute fair use. The court rejected this argument, stating that Meltwater customers were using the news aggregator as a substitute for the original work, instead of clicking through to the original article like a search engine.

In his article, Quinn argued that the Meltwater court’s interpretation of “transformative” was too narrow, and that such an interpretation made an untenable distinction between search engines and automatic news aggregators who function similarly. Quinn asked, “[W]hat if a news aggregator can show that its commercial consumers only use the snippets for monitoring how frequently it is mentioned in the media and by whom? Is that not a different ‘use’?” Well, the recent case of Fox News Network, LLC v. TVEyes, Inc. presented a dispute similar to Quinn’s hypothetical that might indicate support for his argument.

In TVEyes, Fox News claimed that TVEyes, a media-monitoring service that aggregated news reports into a searchable database, had infringed copyrighted clips of Fox News programs. The TVEyes database allowed subscribers to track when, where, and how words of interest are used in the media—the type of monitoring that Quinn argued should constitute a “transformative” use. In a 2014 ruling, the court held that TVEyes’ search engine that displayed clips was transformative because it converted the original work into a research tool by enabling subscribers to research, criticize, and comment. 43 F. Supp. 3d 379 (S.D.N.Y. 2014). In a 2015 decision, the court analyzed a few specific features of the TVEyes service, including an archiving function and a date-time search function. 2015 WL 5025274 (S.D.N.Y. Aug. 25, 2015). The court held that the archiving feature constituted fair use because it allowed subscribers to detect patterns and trends and save clips for later research and commentary. However, the court held that the date-time search function (allowing users to search for video clips by date and time of airing) was not fair use. The court reasoned that users who have date and time information could easily obtain that clip from the copyright holder or licensing agents (e.g. by buying a DVD).

While the court’s decision did point out that the video clip database was different in kind from that of a collection of print news articles, the TVEyes decisions show that the court may now be willing to allow automatic news aggregators to invoke the fair use defense when they can show that their collection of print news articles enables consumers to track patterns and trends in print news articles for research, criticism, and commentary. Thus, the TVEyes decisions may lead the court to reconsider the distinction between search engines and automatic news aggregators established in Meltwater that puts news aggregators at a disadvantage when it comes to fair use.