Internet

Counterfeit Products: A Growing Issue for Online Retailers

Caleb Holtz, MJLST Staffer

Two years ago, my girlfriend gave me an Amazon receipt showing she had ordered a really cool German national soccer team jersey for me. I was very excited. Not only had my girlfriend purchased a great jersey for me, but she had also found a reputable, accessible retailer for buying soccer jerseys in the United States. My excitement soon faded, however. The jersey was delayed and delayed, and eventually Amazon cancelled the order and issued a refund. It turned out the jersey was sold by a popular counterfeiter hosting products on Amazon’s site through Amazon’s popular “Fulfilled by Amazon” program. Unaware before this experience that such a program existed, my girlfriend had been lured into a false sense of security, believing she was purchasing something from the world’s largest retailer rather than from an obscure counterfeiter.

As it turns out, we were far from the only consumers to fall prey to counterfeit goods sold on Amazon. Per a recent Engadget article discussing the issue, “the Counterfeit Report, an advocacy group that works with brands to detect fake goods, has found around 58,000 counterfeit products on Amazon since May 2016.” Amazon, recognizing that customers place extra trust in products sold on its website, counterfeits included, set a goal in 2017 of fighting counterfeits.

Amazon is hardly the only retailer dealing with counterfeiting issues. The International Trademark Association said that trade in pirated and counterfeited intellectual property accounted for $461 billion in 2013. The Chinese retail giant Alibaba was at one point put on a U.S. anti-counterfeiting blacklist because of the large quantities of counterfeit goods sold on its website. eBay, Walmart, Sears, and Newegg have also faced allegations of hosting counterfeit products. Importantly, however, each of these retailers faces few legal repercussions for merely hosting counterfeit goods. With the exception of a 2008 case against eBay, they have largely avoided being found liable for the counterfeit products they aided in selling.

Amazon provides the best blueprint for avoiding liability. Amazon has avoided liability by arguing that while it may host sellers, it is not a seller itself. Fortunately for Amazon, the Federal Circuit agrees that it is not a seller of the counterfeit goods and therefore cannot be liable for copyright infringement, even though Amazon stored and shipped the products from its own warehouses. Milo & Gabby LLC v. Amazon.com, Inc., 693 F. App’x 879 (Fed. Cir. 2017). As a mere marketplace, Amazon can continue avoiding liability so long as it responds appropriately to complaints of intellectual property infringement.

It will be interesting to see how the parties involved handle this counterfeiting issue going forward, especially as the government anticipates that the counterfeiting business will continue to grow. Online retailers are taking proactive steps to limit the sale of counterfeits on their websites, although those steps have been far from effective. Some have suggested artificial intelligence holds the key to solving this problem. Wronged intellectual property owners have not given up on forcing a remedy through the judicial system, as can be seen from the lawsuit Daimler recently filed against Amazon. Finally, some, such as the judge in the Milo & Gabby case, favor a legislative approach, such as closing the marketplace loophole that allows online retailers to skate by relatively untouched. Unfortunately for consumers, there does not appear to be an imminent solution to this problem, so it is best to be aware of how to avoid accidentally purchasing a counterfeit.


Controversial Anti-Sex Trafficking Bill Eliminates Safe-Harbor for Tech Companies

Maya Digre, MJLST Staffer

 

Last week the U.S. Senate voted to approve the Stop Enabling Sex Traffickers Act. The U.S. House of Representatives also passed a similar bill earlier this year. The bill creates an exception to Section 230 of the Communications Decency Act that allows victims of sex trafficking to sue websites that enabled their abuse. The bill was overwhelmingly approved in both the U.S. House and Senate, receiving 388-25 and 97-2 votes respectively. President Trump has indicated that he is likely to sign the bill.

 

Section 230 of the Communications Decency Act shields websites from liability stemming from content posted by third parties on their sites. Many tech companies argue that this provision has allowed them to become successful without a constant threat of liability. However, websites like Facebook, Google, and Twitter have recently received criticism for the role they unintentionally played in enabling meddling in the 2016 presidential election. Seemingly, the “hands-off” approach of many websites has become a problem that Congress now seeks to address, at least with respect to sex trafficking.

 

The proposed exception would expose websites to liability if they “knowingly” assist, support, or facilitate sex trafficking. The bill seeks to make websites more accountable for posts on their sites, discouraging a “hands-off” approach.

 

While the proposed legislation has received bipartisan support in Congress, it has been quite controversial in many communities. Tech companies, free-speech advocates, and consensual sex workers all argue that the bill will have unintended adverse consequences. The tech companies and free-speech advocates argue that the bill will stifle speech on the internet and force smaller tech companies out of business for fear of liability. Consensual sex workers argue that the bill will shut down their online presence, forcing them to engage in high-risk street work. Other debates center on how the “knowingly” standard will affect how websites are run. Critics argue that, in response to this standard, “[s]ites will either censor more content to lower risk of knowing about sex trafficking, or they will dial down moderation in an effort not to know.” At least one website has already altered its behavior in the wake of this bill: in response to the legislation, Craigslist has removed the “personal ads” section from its website.

 


The Unfair Advantage of Web Television

Richard Yo, MJLST Staffer

 

Up to a certain point, ISPs like Comcast, Verizon, and AT&T enjoy healthy, mutually beneficial relationships with web content companies such as Netflix, YouTube, and Amazon. That relationship holds even as regular internet usage moves beyond email and webpage browsing to VoIP and video streaming. To consume data-heavy content, users seek the wider bandwidth of broadband service, and ISPs are more than happy to provide it at a premium. However, once one side ventures into the other’s territory, the relationship becomes less tenable unless it is restructured or improved upon. The problem is compounded when both sides attempt to mimic each other.

 

Such a tension had clearly arisen by the time Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014), was decided. The D.C. Circuit vacated two of the three rules that made up the FCC’s 2010 Open Internet Order, holding that the transparency rule applied to all providers but that the anti-blocking and anti-discrimination rules could be imposed only on common carriers. The FCC had previously classified ISPs under Title I of the Communications Act; common carriers are classified under Title II. The 2014 decision thus confirmed that broadband companies, not being common carriers, could throttle or prioritize the speeds of websites and web services at their discretion so long as they were transparent about it. So, to credit the internet’s astounding growth and development to light-touch regulation is disingenuous, even if the statement is true on its face: discriminatory and blocking behavior simply was not in broadband providers’ interest during the early days of the internet, thanks to the aforementioned relationship.

 

Once web content began taking on the familiar forms of broadcast television, signs of throttling were evident. Netflix began original programming in 2013 and saw its streaming speeds drop dramatically that year on both Verizon and Comcast networks. In 2014, Netflix made separate peering-interconnection agreements with both companies to secure reliably fast speeds for itself. Soon, public outcry led to the FCC’s 2015 Open Internet Order reclassifying broadband internet service as a “telecommunications service” subject to Title II. ISPs were now common carriers and net neutrality was in play, at least briefly (2015-2018).

 

Due to the FCC’s 2018 Restoring Internet Freedom Order, many of the features of the 2015 order have been reversed. Some now fear that ISPs will again attempt to control the traffic on their networks in all sorts of insidious ways. This is a legitimate concern, but not one that necessarily spans the entire spectrum of the internet.

 

The internet has largely gone unregulated thanks to legislation and policies meant to encourage innovation and discourse. Under this incubatory setting, numerous advancements and developments have indeed been made. One quasi-advancement is the streaming of voice and video. The internet has gone from cat videos to award-winning dramas. What began as a supplement to mainstream entertainment has now become the dominant force. Instead of Holly Hunter rushing across a busy TV station, we have Philip DeFranco booting up his iMac. Our tastes have changed, and with them, the production involved.

 

There is an imbalance here. Broadcast television has always endured the scrutiny of the FCC, even more than its cable brethren. The pragmatic reason for this has always been broadcast television’s availability, or rather its unavoidability. Censors saw to it that obscenities would never cross a child’s view, even inadvertently. But it cannot be denied that the internet is vastly more ubiquitous. Laptop, tablet, and smartphone sales outnumber those of televisions. Even TVs are now ‘smart,’ serving not only their first master but a second web master as well (no pun intended). Shows like Community and Arrested Development were network television shows (on NBC and FOX, respectively) one minute, and web content (on Yahoo! and Netflix, respectively) the next. The form and function of these programs had not substantially changed, but they were suddenly free of the FCC’s reign. Virtually identical productions on different platforms are regulated differently, all due to arguments anchored by fears of stagnation.


New Data Protection Regulation in European Union Could Have Global Ramifications

Kevin Cunningham, MJLST Staffer

 

For as long as the commercial web has existed, companies have monetized personal information by mining data. On May 25, however, individuals in the 28 member countries of the European Union will gain the right to decide whether to opt into the data collection relied on by so many data companies. The General Data Protection Regulation (GDPR), agreed upon by the European Parliament and Council in April 2016, will replace Data Protection Directive 95/46/EC as the primary law regulating how companies protect the personal data of individuals in the European Union. The requirements of the new GDPR aim to create more consistent protection of consumer and personal data across the European Union.

 

Publishers, banks, universities, data and technology companies, ad-tech companies, device makers, and application providers operating in the European Union will have to comply with the privacy and data protection requirements of the GDPR or be subject to heavy fines (up to four (4) percent of annual global revenue) and penalties. Some of the requirements include: obtaining subjects’ consent for data processing; anonymizing collected data to protect privacy; providing data breach notifications within 72 hours of becoming aware of a breach; safely handling the transfer of data across borders; and appointing, at certain companies, a data protection officer to oversee compliance with the Regulation. Likewise, the European Commission posted on its website that a social network platform will have to adhere to user requests to delete photos and inform search engines and other websites that used the photos that the images should be removed. This baseline set of standards for companies handling data in the EU will better protect the processing and movement of personal data.

 

Companies will have to be clear and concise about the collection and use of personally identifiable information such as name, home address, location data, or IP address. Consumers will have the right to access the data that companies store about them, as well as the right to correct false or inaccurate information. Moreover, the GDPR imposes stricter conditions on the collection of ‘sensitive data’ such as race, political affiliation, sexual orientation, and religion. The GDPR will still allow businesses to process personally identifiable information without consumer consent for legitimate business interests, which include direct marketing through mail, email, or online ads. Still, companies relying on that basis will have to account for, and balance their interests against, the rights and freedoms of the individuals whose data they process.

 

The change to European law could have global ramifications. Any company that markets goods or services to EU residents will be subject to the GDPR. Many of the giant tech companies that collect data, such as Google and Facebook, prefer to keep their systems uniform worldwide and have either revamped their privacy settings or announced changes to make them more user-friendly.


University of Minnesota Partners With Michigan State University to Launch SCOTUS Notes

Brandy Hough, MJLST Staffer

 

If you thought your elementary school’s grueling cursive curriculum was all for naught, you’re sadly mistaken. The University of Minnesota, in partnership with Michigan State University, launched a crowdsourcing project this month to transcribe Supreme Court justices’ handwritten conference notes. The project, dubbed SCOTUS Notes, engages “citizen archivists” to decipher and transcribe the justices’ notes, with the goal of making them broadly accessible in an electronic and legible format. If you can spot a cursive Z from a mile away, you might just help transcribe history.

 

Researchers at the two universities received a three-year federal grant from the National Science Foundation to fund the project, which is hosted on Zooniverse, a large-scale platform for “people-powered research.” The researchers hope that crowdsourcing the work will lead to more efficient and accurate results than could be achieved by staff researchers alone. Project co-director Tim Johnson explains that ten people independently transcribe each page, which allows researchers “to obtain high level agreement scores for every word transcribed—even when the words are really difficult to determine.” At the time of writing, 876 volunteers have registered since the project’s February 13 launch date. You can monitor the project’s progress in real time on the SCOTUS Notes Zooniverse page.
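For the curious, here is a minimal sketch of what a word-level agreement score could look like, written in Python. It is purely illustrative and emphatically not the SCOTUS Notes pipeline (the real Zooniverse aggregation must also align transcriptions of different lengths and handle illegible words); it simply shows the basic idea of comparing independent readings word by word and scoring how often transcribers agree.

    from collections import Counter

    def word_agreement(transcriptions):
        """For aligned transcriptions of the same page (each a list of words),
        return the consensus word and the share of transcribers who agree,
        position by position. Illustrative only, not the project's actual method."""
        scores = []
        for readings in zip(*transcriptions):  # compare the same position across transcribers
            counts = Counter(word.lower() for word in readings)
            consensus, votes = counts.most_common(1)[0]
            scores.append((consensus, round(votes / len(readings), 2)))
        return scores

    # Hypothetical example: three volunteers transcribe the same four-word phrase
    pages = [
        ["affirm", "on", "both", "grounds"],
        ["affirm", "on", "both", "grounds"],
        ["affirm", "in", "both", "grounds"],
    ]
    print(word_agreement(pages))  # [('affirm', 1.0), ('on', 0.67), ('both', 1.0), ('grounds', 1.0)]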

 

In its current phase, SCOTUS Notes aims to harness its volunteer power to transcribe 12,600 pages of conference notes taken by Justices Harry A. Blackmun and William J. Brennan related to cases decided between 1959 and 1994. These notes provide valuable insights into judicial decision-making at our nation’s highest level. As explained on the SCOTUS Notes blog:

 

“U.S. Supreme Court justices cast votes in complete secrecy during weekly meetings, which only justices are allowed to attend. During these meetings, the justices discuss, deliberate, and make initial decisions on cases they have heard–many of which address the most important legal and policy issues in the U.S. The written notes the justices themselves take during these meetings provide the only record of what was said and by whom.”

 

Johnson adds, “I think law students will find that ‘understanding how the sausage is made’ is integral to understanding how and why SCOTUS makes the decisions it does. Without knowing what happens behind the scenes, it is really hard to have a fully accurate picture.”

 

In the future, the researchers plan to engage volunteers to transcribe notes of Justices Powell, Douglas, Marshall, Rehnquist and Warren. Upon completion of the project, scotusnotes.org will provide public access to images of the original pages as well as the transcriptions. 

 

For more information or to get involved with the project, visit the SCOTUS Notes page at https://www.zooniverse.org/projects/zooniverse/scotus-notes-behind-the-scenes-at-supreme-court-conference.


E-Threat: Imminent Danger in the Information Age

Jacob Weindling, MJLST Staffer

 

One of the basic guarantees of the First Amendment is the right to free speech. This right protects the individual from restrictions on speech by the government, but is often invoked as a rhetorical weapon against private individuals or organizations declining to publish another’s words. On the internet, these organizations include some of the most popular discussion platforms in the U.S., including Facebook, Reddit, Yahoo, and Twitter. A key feature of these organizations is their lack of government control. As recently as 2017, the Supreme Court has identified First Amendment grounds for overturning prohibitions on social media access. Indeed, one of the only major government prohibitions on speech currently in force is the ban on child pornography. Violent rhetoric, meanwhile, continues to fall under the Constitutional protections identified by the Court.

Historically, the Supreme Court has taken a nuanced view of violent speech as it relates to the First Amendment. The Court held in Brandenburg v. Ohio that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Contrast this with discussion of a moral responsibility to resort to violence, which the Supreme Court held in Noto v. United States to be distinct from preparing a group for imminent violent acts.

With the rise and maturation of the internet, public discourse has entered a new and relatively uncharted territory that the Supreme Court would have been hard-pressed to anticipate at the time of the Brandenburg and Noto decisions. Where once geography served to isolate Neo-Nazi groups and the Ku Klux Klan into small local chapters, the internet now provides a centralized meeting place for the dissemination and discussion of violent rhetoric. Historically, the Supreme Court concerned itself mightily with the distinction between an imminent call to action and a general discussion of moral imperatives, making clear delineations between the two.

The context of the Brandenburg decision was a pre-information age telecommunications regime. While large amounts of information could be transmitted around the world in relatively short order thanks to the development of international commercial air travel, real-time communication was generally limited to telephone conversations between two individuals. An imminent call to action would require substantial real-world logistics, meetings, and preparation, all of which provide significant opportunities for detection and disruption by law enforcement. By comparison, internet forums today provide for near-instant communication between large groups of individuals across the entire world, likely narrowing the window that law enforcement would have to identify and act upon a credible, imminent threat.

At what point does Islamic State recruitment or militant Neo-Nazi organizing on the internet rise to the level of imminent threat? The Supreme Court has not yet decided the issue, but many internet businesses have recently begun to take matters into their own hands. Facebook and YouTube have reportedly been more active in policing Islamic State propaganda, while Reddit has taken some steps to remove communities that advocate for rape and violence. Consequently, while the Supreme Court has not yet elected to draw (or redraw) a bright red line in the internet age, many businesses appear to be taking the first steps to draw the line themselves, on their terms.


Airbnb Regulations Spark Controversy, but Have Limited Effect on Super Bowl Market

Sam Louwagie, MJLST Staffer

 

As Super Bowl LII descends upon Minneapolis, many Twin Cities residents are hoping to receive a windfall by renting out their homes to visiting Eagles and Patriots fans. City regulations placed on online short-term rental platforms such as Airbnb last fall, which prompted an outcry from those platforms, do not appear to be having much of an effect on the dramatic surge in supply.

The short-term rental market in Minneapolis has been a renter’s market in the days since the Super Bowl matchup was set. There are 5,000 listings in the Twin Cities on Airbnb this week, as compared to 1,000 at this time last year, according to the Star Tribune. The flood of posted housing options has kept prices in check: the average listing has cost $240 per night, more than usual but far less than the thousands of dollars some would-be renters had hoped for. One homeowner told the Star Tribune that she had gotten no interest in her 4,000-square-foot, six-bedroom house just five blocks from U.S. Bank Stadium, and had “cut the price drastically.”

The surge in Airbnb listings comes despite ordinances that went into effect in December in both Minneapolis and St. Paul. The cities joined a growing list of major U.S. cities passing regulations aimed at ensuring guest safety and taking a small cut of tax revenue from the rentals. Minneapolis’ ordinance requires a short-term renter to apply for a license with the city, which costs $46 annually; St. Paul’s license costs $40 per year. As of mid-December, according to MinnPost, only 18 applications had been submitted in Minneapolis and only 32 in St. Paul, which suggests that many of the thousands of listings during Super Bowl week are likely unlicensed. Both cities say they will notify renters who are not in compliance before taking any enforcement action, but a violation will cost $500 in Minneapolis and $300 in St. Paul.

The online rental platforms themselves had strongly objected to the passage of the ordinances, which would require Airbnb to apply for a short-term rental platform license. This would bring a $10,000 annual fee in St. Paul and a $5,000 large platform fee in Minneapolis. According to MinnPost, as of mid-December, no platforms had submitted an application and it was “unclear whether they [would] comply.” Airbnb said in a statement that it believes the regulations violate the 1996 federal Communications Decency Act, and that “the ordinance violates the legal rights of Airbnb and its community.”

While the city ordinances created controversy in the legal world, they do not seem to be having a similar effect on the ground in Minneapolis, as Super Bowl guests still have a dramatic surplus of renting options.


Fi-ARRR-E & Fury: Why Even Reading the Pirated Copy of Michael Wolff’s New Book Is Probably Copyright Infringement

By Tim Joyce, MJLST EIC-Emeritus

 

THE SITUATION

Lately I’ve seen several Facebook links to a pirated copy of Fire & Fury: Inside the Trump White House, the juicy Michael Wolff exposé documenting the first nine months of the President’s tenure. The book reportedly gives deep, behind-the-scenes perspectives on many of Mr. Trump’s most controversial actions, including firing James Comey and accusing President Obama of wiretapping Trump Tower.

 

It was therefore not surprising when Trump lawyers slapped a cease & desist letter on Wolff and his publisher. While there are probably volumes yet to be written about the merits of those claims (in my humble opinion: “sorry, bros, that’s not how defamation of a public figure works”), this blog post deals with the copyright implications of sharing and reading the pirated copy of the book, and the ethical quandaries it creates. I’ll start with the straightforward part.

 

THE APPLICABLE LAW

First, it should almost go without saying that the person who initially created the PDF copy of the 300+ page book broke the law. (Full disclosure: I did click on the Google link, but only to verify that it was indeed the book and not just a cover page. It was. Even including the page with copyright information!) I’ll briefly connect the dots for any copyright-novices reading along:

 

    • Wolff is the “author” of the book, a “literary work” that qualifies as one of the “original works of authorship fixed in any tangible medium of expression” [see 17 USC 102].
    • As the author, one of his copyrights is to control … well … copying. The US Code calls that “reproduction” [see 17 USC 106].
    • He also gets the exclusive right to “display” the literary work “by means of a film, slide, television image, or any other device or process” [see 17 USC 101]. Basically, he controls display in any medium, including, say, via a Google Drive folder.
    • Unauthorized reproduction, display, and/or distribution is called “infringement” [see 17 USC 501]. There are several specific exceptions carved into the copyright code for different types of creative works, uses, audiences, and other situations. But this doesn’t fall into one of those exceptions.

 

  • So, the anonymous infringer has broken the law.

 

  • [It’s not clear, yet, whether this person is also a criminal under 17 USC 506, because I haven’t seen any evidence of fraudulent intent or acting “for purposes of commercial advantage or private financial gain.”]

 

Next, anyone who downloads a copy of the book onto their smartphone or laptop is also an infringer. The same analysis applies as above, only with a different starting point. The underlying material’s copyright is still held by Wolff as the author. Downloading creates a “reproduction,” which is still unauthorized by the copyright owner. Unauthorized exercise of rights held exclusively by the author + no applicable exceptions = infringement.

 

Third, I found myself stuck as to whether I, as a person who had intentionally clicked through into the Google Drive hosting the PDF file, had also technically violated copyright law. Here, I hadn’t downloaded anything, but merely clicked the link, which launched the PDF in a new Chrome tab. The issue I got hung up on was whether that had created a “copy,” that is, a “material object[] … in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” [17 USC 101]

 

Computer reproductions are tricky, in part because US courts lately haven’t exactly given clear guidance on the matter. (Because I was curious — In Europe and the UK, it seems like there’s an exception for temporary virtual copies, but only when incidental to lawful uses.) There’s some debate as to whether it’s infringement if only the computer is reading the file, and for a purpose different than perceiving the artistic expression. (You may remember the Google Books cases…) However, when it’s humans doing the reading, that “purpose of the copying” argument seems to fall by the wayside.

 

Cases like Cartoon Network v. CSC Holdings have attempted to solve the problem of temporary copies (as when a new browser window opens), but the outcome there (i.e., temporary copies = ok) was based in part on the fact that the streaming service being sued had the right to air the media in question. Its copy-making was merely for the purposes of increasing speed and reducing buffering for its paid subscribers. Here, where the right to distribute the work is decidedly absent, the outcome seems like it should be the opposite. There may be a case out there that deals squarely with this situation, but it’s been a while since copyright class (yay, graduation!) and I don’t have free access to Westlaw anymore. It’s the best I could do in an afternoon.

 

Of course, an efficient solution here would be to crack down on the entities and individuals that make the infringement possible in the first place – ISPs and content distributors. The Digital Millennium Copyright Act already gives copyright owners a process to make Facebook take down bootleg copies of their work. But that only solves half the problem, in my opinion. We have to reconcile our individual ethics of infringement too.

 

ETHICAL ISSUES, FOR ARTISTS IN PARTICULAR

One of the more troubling aspects of this pirateering that I saw was that the link-shares came from people who make their living in the arts. These are the folks who–rightly, in my opinion–rail against potential “employers” offering “exposure” instead of cold hard cash when they agree to perform. To expect to be paid for your art, while at the same time sharing an illegal copy of someone else’s, is logically inconsistent to me.

 

As a former theater actor and director (read: professional almost-broke person) myself, I can understand the desire to save a few dollars by reading the pirated copy. The economics of making a living performing are tough – often you agree to take certain very-low-paying artistic jobs as loss-leaders toward future jobs. But I have only met a very few of us willing to perform for free, and even fewer who would tolerate rehearsing with the promise of pay only to be stiffed after the performance is done. That’s essentially what’s happening when folks share this bootleg copy of Michael Wolff’s book.

 

I’ve heard some relativistic views on the matter, saying that THIS book containing THIS information is so important NOW, that a little infringement shouldn’t matter. But you could argue that Hamilton, the hit musical about the founding of our nation and government, has equally urgent messages regarding democracy, totalitarianism, individual rights, etc. Should anyone, therefore, be allowed to just walk into the theater and see the show without paying? Should the cast be forced to continue performing even when there is no longer ticket revenue flowing to pay for their efforts? I say that in order to protect justice at all times, we have to protect justice this time.

 

tl;dr

Creating, downloading, and possibly even just viewing the bootleg copy of Michael Wolff’s book circulating around Facebook is copyright infringement. We cannot violate this author’s rights now if we expect to have our own artistic rights protected tomorrow.

 

Contact Me!

These were just some quick thoughts, and I’m sure there’s more to say on the matter. If you’d like to discuss any copyright issues further, I’m all ears.


Congress, Google Clash Over Sex-Trafficking Liability Law

Samuel Louwagie, MJLST Staffer

Should web companies be held liable when users engage in criminal sex trafficking on the platforms they provide? Members of both political parties in Congress are pushing to make the answer to that question yes, over the opposition of tech giants like Google.

The Communications Act was enacted in 1934. In 1996, as the internet went live, Congress added Section 230 to the act through the Communications Decency Act. That provision protects providers of web platforms from civil liability for content posted by users of those platforms. The act states that in order to “promote the continued development of the internet . . . No provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That protection, according to the ACLU, “defines Internet culture as we know it.”

Earlier this month, Congress debated an amendment to Section 230 called the Stop Enabling Sex Traffickers Act of 2017. The act would remove that protection from web platforms that knowingly allow sex trafficking to take place. The proposal comes after the First Circuit Court of Appeals held in March of 2016 that even though Backpage.com played a role in trafficking underage girls, section 230 protected it from liability. Sen. Rob Portman, a co-sponsor of the bill, wrote that it is Congress’ “responsibility to change this law” while “women and children have . . . their most basic rights stripped from them.” And even some tech companies, such as Oracle, have supported the bill.

Google, meanwhile, has resisted such emotional pleas. Its lobbyists have argued that Backpage.com could be criminally prosecuted, and that removing core protections from internet companies would damage the free nature of the web. Critics of Google’s position, such as New York Times columnist Nicholas Kristof, argue the Stop Enabling Sex Traffickers Act was crafted “exceedingly narrowly to target those intentionally engaged in trafficking children.”

The bill has bipartisan support and appears to be gaining steam. The Internet Association, a trade group that includes Google and Facebook, expressed a willingness at a Congressional hearing to support “targeted amendments” to the Communications Decency Act. Whether Google likes it or not, eventually platforms will be at legal risk if they don’t police their content for sex trafficking.