Social Media

Privacy, Public Facebook Posts, and the Medicalization of Everything

Peter J. Teravskis, MD/JD Candidate, MJLST Staffer

Medicalization is “a process by which human problems come to be defined and treated as medical problems.” Medicalization is not a formalized process, but is instead “a social meaning embedded within other social meanings.” As the medical domain has expanded in recent years, scholars have begun to point to problems with “over-medicalization” or “corrupted medicalization.” Specifically, medicalization is used to describe “the expansion of medicine in people’s lives.” For example, scholars have problematized the medicalization of obesity, shyness, housing, poverty, normal aging, and even dying, amongst many others. The process of medicalization has become so pervasive in recent years that various sociologists have begun to discuss it as the medicalization “of everyday life,” “of society,” “of culture,” of the human condition, and “the medicalization of everything”—i.e., turning all human difference into pathology. Similarly, developments in “technoscientific biomedicine” have led scholars to blur the line of what is exclusively “medical” into a broader process of “biomedicalization.”

Medicalization does not carry a valence of “good” or “bad” per se: medicalization and demedicalization can both restrict and expand personal liberties. However, when everyday living is medicalized there are many attendant problems. First, medicalization places problems outside a person’s control: rather than the result of choice, personality, or character, a medicalized problem is considered biologically preordained or “curable.” Medicalized human differences are no longer considered normal; therefore, “treatment” becomes a “foregone conclusion.” Because of this, companies are incentivized to create pharmacological and biotechnological solutions to “cure” the medicalized problem. From a legal perspective, Professor Adele E. Clarke and colleagues note that through medicalization, “social problems deemed morally problematic . . . [are] moved from the professional jurisdiction of the law to that of medicine.” This process is referred to, generally, as the “medicalization of deviance.” Further, medicalization can de-normalize aspects of the human condition and classify people as “diseased.”

Medicalization is important to the sociological study of social control. Social control is defined as the “mechanisms, in the form of patterns of pressure, through which society maintains social order and cohesion.” Thus, once medicalized, an illness is subject to control by medicinal interventions (drugs, surgery, therapy, etc.), and sick people are expected to take on the “sick role” whereby they become the subjects of physicians’ professional control. A recent example of medical social control is the social pressure to engage in hygienic habits, precautionary measures, and “social distancing” in response to COVID-19, the disease caused by the novel coronavirus. The COVID-19 pandemic is an expressly medical problem; however, when normal life, rather than a viral outbreak, is medicalized, medical social control becomes problematic. For example, the sociologist Peter Conrad argues that medical social control can take the form of “medical surveillance.” He states that “this form of medical social control suggests that certain conditions or behaviors become perceived through a ‘medical gaze’ and that physicians may legitimately lay claim to all activities concerning the condition” (quoting Michel Foucault’s seminal book The Birth of the Clinic).

The effects of medical social control are amplified due to the communal nature of medicine and healthcare, leading to “medical-legal hybrid[]” social control and, I argue, medical-corporate social control. For example, employers and insurers have interests in encouraging healthful behavior when it reduces members’ health care costs. Similarly, employers are interested in maximizing healthy working days, decreasing worker turnover, and maximizing healthy years, thus expanding the workforce. The State has similar interests, as well as interests in reducing end-of-life and old age medical costs. At first glance, this would seem to militate against overmedicalization. However, modern epidemiological methods have revealed the long-term consequences of untreated medical problems. Thus, medicalization may divert health care dollars toward less expensive preventative interventions and away from the more expensive therapies that would otherwise be needed later in life.

An illustrative example is the medicalization of obesity. Historically, obesity was not considered a disease but was a socially desirable condition: demonstrating wealth; the ability to afford expensive, energy-dense foods; and a life of leisure rather than manual labor. Changing social norms, increased life expectancy, highly sensitive biomedical technologies for identifying subtle metabolic changes in blood chemistry, and population-level associations between obesity and later-life health complications have contributed to the medicalization of this condition. Obesity, unlike many other conditions, is not attributable to a single biological process; rather, it is hypothesized to result from the contribution of multiple genetic and environmental factors. As such, there is no “silver bullet” treatment for obesity. Instead, “treatment” for obesity requires profound changes reaching deep into how a patient lives her life. Many of these interventions have profound psychosocial implications. Medicalized obesity has led, in part, to the stigmatization of people with obesity. Further, medical recommendations for the treatment of obesity, including gym memberships and expensive “health” foods, are costly for the individual.

Because medicalized problems are considered social problems affecting whole communities, governments and employers have stepped in to treat the problem. Politically, the so-called “obesity epidemic” has led to myriad policy changes and proposals. Restrictions designed to combat the obesity epidemic have included taxes, bans, and advertising restrictions on energy-dense food products. On the other hand, states and the federal government have implemented proactive measures to address obesity; for example, public funds have been allocated to encourage access to, and awareness of, “healthy foods” and healthy habits. Further, Social Security Disability, Medicare and Medicaid, and the Supplemental Nutrition Assistance Program have been modified to cope with the economic and health effects of obesity.

Other tools of control are available to employers and insurance providers. Most punitively, corporate insurance plans can increase rates for obese employees. As Abby Ellin, writing for Observer, explained, “[p]enalizing employees for pounds is perfectly legal [under the Affordable Care Act]” (citing a policy brief published in the journal Health Affairs). Alternatively, employers and insurers have paid for or provided incentives for gym memberships and use, some going so far as to provide exercise facilities in the workplace. Similarly, some employers have sought to modify employee food choices by providing or restricting food options available in the office. The development of wearable computer technologies has presented another option for enforcing obesity-focused behavioral control. Employer-provided FitBits are “an increasingly valuable source of workforce health intelligence for employers and insurance companies.” In fact, Apple advertises Apple Watch to corporate wellness divisions, and various media outlets have noted how Apple Watch and iPhone applications can be used by employers for health surveillance.

Indeed, medicalization as a pretense for technological surveillance and social control is not exclusively used in the context of obesity prevention. For instance, the medicalization of old age has coincided with the technological surveillance of older people. Most troubling, medicalization in concert with other social forces has spawned an emerging field of technological surveillance of mental illness. Multiple studies, and current NIH-funded research, are aimed at developing algorithms for the diagnosis of mental illness based on data mined from publicly accessible social media and internet forum posts. This process is called “social media analysis.” These technologies are actively medicalizing the content of digital communications. They subject peoples’ social media postings to an algorithmic imitation of the medical gaze, whereby “physicians may legitimately lay claim to” those social media interactions. If social media analysis performs as hypothesized, certain combinations of words and phrases will constitute evidence of disease. Similar technology has already been coopted as a mechanism of social control to detect potential perpetrators of mass shootings. Policy makers have already seized upon the promise of medical social media analysis as a means to enforce “red flag” laws. Red flag laws “authorize courts to issue a special type of protection order, allowing the police to temporarily confiscate firearms from people who are deemed by a judge to be a danger to themselves or to others.” Similarly, it is conceivable that this type of evidence will be used in civil commitment proceedings. If implemented, such programs would constitute a link by which medical surveillance, under the banner of medicalization, could be used as grounds to deprive individuals of civil liberty, demonstrating an explicit medical-legal hybrid social control mechanism.
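The mechanics being described can be made concrete with a deliberately crude sketch. The phrase list, threshold, and function names below are invented for illustration only; published social media analysis research relies on far more sophisticated statistical models than simple phrase matching:

```python
# Hypothetical illustration of phrase-based "social media analysis."
# The flagged phrases and threshold are invented for this sketch; they
# do not come from any real diagnostic model.
FLAGGED_PHRASES = {"can't sleep", "no one cares", "hopeless"}

def risk_score(post: str) -> int:
    """Count how many flagged phrases appear in a post."""
    text = post.lower()
    return sum(phrase in text for phrase in FLAGGED_PHRASES)

def flag_for_review(post: str, threshold: int = 2) -> bool:
    """Posts crossing the threshold would be surfaced to the 'medical gaze.'"""
    return risk_score(post) >= threshold
```

Under any scheme of this shape, ordinary combinations of words become potential "evidence of disease," which is precisely the medicalization of digital speech described above.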

What protections does the law offer? The Fourth Amendment protects people from unreasonable searches. To determine whether a “search” has occurred courts ask whether the individual has a “reasonable expectation of privacy” in the contents of the search. Therefore, whether a person had a reasonable expectation of privacy in publicly available social media data is critical to determining whether that data can be used in civil commitment proceedings or for red flag law protective orders.

Public social media data is, obviously, public, so courts have generally held that individuals have no reasonable expectation of privacy in its contents. By contrast, the Supreme Court has ruled that individuals have a reasonable expectation of privacy in the data contained on their cell phones and personal computers, as well as their personal location data (cell-site location information) legally collected by third party cell service providers. Therefore, it is an open question how far a person’s reasonable expectation of privacy extends in the case of digital information. Specifically, when public social media data is used for medical surveillance and making psychological diagnoses the legal calculation may change. One interpretation of the “reasonable expectation of privacy” test argues that it is an objective test—asking whether a reasonable person would actually have a privacy interest. Indeed, some scholars have suggested using polling data to define the perimeter of Fourth Amendment protections. In that vein, an analysis of the American Psychiatric Association’s “Goldwater Rule” is illustrative.

The Goldwater Rule emerged after the media outlet “Fact” published psychiatrists’ medical impressions of 1964 presidential candidate Barry Goldwater. Goldwater filed a libel suit against Fact, and the jury awarded him $1.00 in compensatory damages and $75,000 in punitive damages resulting from the publication of the psychiatric evaluations. None of the quoted psychiatrists had met or examined Goldwater in person. Subsequently, concerned primarily about the inaccuracies of “diagnoses at a distance,” the APA adopted the Goldwater Rule, prohibiting psychiatrists from engaging in such practices. It is still in effect today.

The Goldwater Rule does not speak to privacy per se, but it does speak to the importance of personal, medical relationships between psychiatrists and patients when arriving at a diagnosis. Courts generally treat those types of relationships as private and protect them from needless public exposure. Further, using social media surveillance to diagnose mental illness is precisely the type of diagnosis-at-a-distance that concerns the APA. However, big-data techniques promise to obviate the diagnostic inaccuracies the 1960s APA was concerned with.

The jury verdict in favor of Goldwater is more instructive. While the jury found only nominal compensatory damages, it nevertheless chose to punish Fact magazine. This suggests that the jury took great umbrage at the publication of psychiatric diagnoses, even though they were obtained from publicly available data. Could this be because psychiatric diagnoses are private? The Second Circuit, upholding the jury verdict, noted that running roughshod over privacy interests is indicative of malice in cases of libel. Under an objective test, this seems to suggest that subjecting public information to the medical gaze, especially the psychiatrist’s gaze, unveils information that is private. In essence, applying big-data computer science techniques to public posts unveils or reveals private information contained in the publicly available words themselves. Even though the public social media posts are not subject to a reasonable expectation of privacy, a psychiatric diagnosis based on those words may be objectively private. In sum, the medicalization and medical surveillance of normal interactions on social media may create a Fourth Amendment privacy interest where none previously existed.


Zoinks! Can the FTC Unmask Advertisements Disguised by Social Media Influencers?

Jennifer Satterfield, MJLST Staffer

Social media sites like Instagram and YouTube are filled with people known as “influencers.” Influencers are people with a following on social media who use their online fame to promote products and services of a brand. But with all that power comes great responsibility, and influencers, as a whole, are not being responsible. One huge example of irresponsible influencer activity is the epic failure and fraudulent music festival known as Fyre Festival. Although Fyre Festival promised a luxury, VIP experience on a remote Bahamian island, it was a true nightmare where “attendees were stranded with half-built huts to sleep in and cold cheese sandwiches to eat.” The most prominent legal action was against Fyre’s founders and organizers, Billy McFarland and Ja Rule, including a six-year criminal sentence for wire fraud against McFarland. Nonetheless, a class action lawsuit also targeted the influencers. According to the lawsuit, the influencers did not comply with Federal Trade Commission (“FTC”) guidelines and disclose they were being paid to advertise the festival. Instead, “influencers gave the impression that the guest list was full of the Social Elite and other celebrities.” Yet, the blowback against influencers since the Fyre Festival fiasco appears to be minimal.

According to a Mediakix report, “[i]n one year, a top celebrity will post an average of 58 sponsored posts and only 3 may be FTC compliant.” The endorsement guidelines specify that if there is a “material connection” between the influencer and the seller of an advertised product, this connection must be fully disclosed. The FTC even created a nifty guide for influencers to ensure compliance. While disclosure is a small burden and there are several resources informing influencers of their duty to disclose, these guidelines are still largely ignored.

Even so, the FTC has sent several warning letters to individual influencers over the years, which indicates it is monitoring top influencers’ posts. However, a mere letter is not doing much to stop the ongoing, flippant, and ignorant disregard toward the FTC guidelines. Besides the letters, the FTC rarely takes action against individual influencers. Instead, if the FTC goes after a bad actor, “it’s usually a brand that[] [has] failed to issue firm disclosure guidelines to paid influencers.” Consequently, even though it appears as if the FTC is cracking down on influencers, it is really only going after the companies. Without actual penalties, it is no wonder most influencers are either unaware of the FTC guidelines or continue to blatantly ignore them.

Considering this problem, there is a question of what the FTC can really do about it. One solution is for the FTC to dig in and actually enforce its guidelines against influencers like it did in 2017 with CSGO Lotto and two individual influencers, Trevor Martin and Thomas Cassell. CSGO Lotto was a website in which users could gamble virtual items called “skins” from the game Counter-Strike: Global Offensive. According to the FTC’s complaint, Martin and Cassell endorsed CSGO Lotto but failed to disclose they were both the owners and officers of the company. CSGO Lotto also paid other influencers to promote the website. The complaint notes that numerous YouTube videos by these influencers either failed to include a sponsorship disclosure in the videos or inconspicuously placed such disclosures “below the fold” in the description box. While the CSGO Lotto action was a huge scandal in the video game industry, it was not widely publicized to the general population. Moreover, Martin and Cassell got away with a mere slap on the wrist—“[t]he [FTC] order settling the charges requires Martin and Cassell to clearly and conspicuously disclose any material connections with an endorser or between an endorser and any promoted product or service.” Thus, it was not enough to compel other influencers into compliance. Instead, if the FTC started enforcement actions against big-name influencers, other influencers may also fear retribution and comply.

On the other hand, the FTC could continue its enforcement against the companies themselves, but this time with more teeth. Currently, the FTC is preparing to take further steps to ensure consumer protection in the world of social media influencers. Recently, FTC Commissioner Rohit Chopra acknowledged in a public statement that “it is not clear whether our actions are deterring misconduct in the marketplace, due to the limited sanctions we have pursued.” Although Chopra is interested in pursuing not small influencers but rather the advertisers that pay them, it is possible that enforcement against the companies will cause influencers to comply as well.

Accordingly, Chopra’s next steps include: (1) “[d]eveloping requirements for technology platforms (e.g. Instagram, YouTube, and TikTok) that facilitate and either directly or indirectly profit from influencer marketing;” (2) “[c]odifying elements of the existing endorsement guides into formal rules so that violators can be liable for civil penalties under Section 5(m)(1)(A) and liable for damages under Section 19;” and (3) “[s]pecifying the requirements that companies must adhere to in their contractual arrangements with influencers, including through sample terms that companies can include in contracts.” By pushing some of the enforcement duties onto social media platforms themselves, the FTC gains more monitoring and enforcement capabilities. Furthermore, codifying the guidelines into formal rules gives the FTC teeth to impose civil penalties and creates tangible consequences for those who previously ignored the guidelines. Finally, by actually requiring companies to adhere to these rules via their contracts with influencers, influencers will be compelled to follow the guidelines as well. Therefore, under these next steps, paid advertising disclosures on social media can become commonplace. But only time will really tell if the FTC will achieve these steps.


Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14th, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May of 2018, sent many U.S. companies scrambling in attempts to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. Google’s fine makes it the first U.S. tech giant to face GDPR enforcement. While a 50 million euro (roughly 57 million dollar) fine may sound hefty, it is actually relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for getting consumers to consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) the inability of users to consent to individual data processes, instead requiring whole-cloth acceptance of both Google’s Privacy Policy and Terms of Service.

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.


Tinder Shows Discrimination Can Take All Shapes in the Internet Age

Caleb Holtz, MJLST Staffer

On January 20th, Tinder Inc., the company responsible for the popular dating mobile app, filed a proposed settlement agreement worth over $17 million. The agreement would resolve claims that Tinder charged older users more to use the app solely because of their age. Interestingly, while many people think of age discrimination against a group for being too old as being solely the concern of AARP members, this discrimination was against people over the age of 29. This is because of the relatively low threshold in California as to what can constitute age discrimination under California civil rights and consumer protection laws.

Discrimination is incredibly common in the Internet age, at least partially because it is so easy to do. Internet users develop a digital “fingerprint” over time and usage which follows them from website to website. Data contained within a digital “fingerprint” can include information from “websites you visit, social platforms you use, searches you perform, and content you consume.” Digital fingerprinting is becoming even more common, as enterprising trackers have discovered a way to track users across multiple different browsing applications. When this information is combined with data users willfully give out on the internet, such as personal data on Facebook or Tinder, it is incredibly easy for companies to create a profile of all of a user’s relevant characteristics. From there it is easy to choose the grounds on which to distinguish among, or discriminate against, users.
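Mechanically, a “fingerprint” of this kind is often derived by hashing a bundle of observable device traits into a single stable identifier. The following is a minimal sketch; the attribute set is illustrative, as real trackers combine dozens of signals such as installed fonts and canvas rendering:

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier by hashing observable browser traits.

    The keys here are illustrative; real trackers combine dozens of
    signals (installed fonts, canvas rendering, audio stack, etc.).
    """
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same traits produce the same identifier on every site the user
# visits, letting trackers link visits together without any cookie.
traits = {"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080", "timezone": "America/Chicago"}
site_one = browser_fingerprint(traits)
site_two = browser_fingerprint(dict(traits))  # an unrelated site observing the same traits
```

Because the identifier is recomputed from the traits themselves, it follows the user from website to website even if cookies are cleared, which is what makes the practice hard for users to detect or opt out of.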

Discrimination in this manner is not always necessarily bad. On the most positive end of the spectrum, institutions like banks can use the information to discern if the wrong person is trying to access an account, based on the person’s digital fingerprint. More commonly, internet companies use the data to discriminate against users, controlling what they see and the price they are offered. A quintessential example of this practice was the study that found travel websites show higher prices to Mac users than PC users. Advocates of the practice argue that it allows companies to customize the user experience on an individual basis, allowing the user to see only what they want to see. They also say that it allows businesses to maximize efficiency, both in terms of maximizing profits and in terms of catering to the customer flow, which would therefore lead to a better user experience in the long run. To this point, the argument in favor of continuing this practice has generally won out, as it remains largely legal in the United States.

Opponents of the practice however say the costs outweigh the benefits. Many people, when shown just how much personal data follows them around the internet, will find the practice “creepy”. Opponents hope they can spread this general sentiment by showing more people just how much of their data is online without their explicit consent. This idea has support because, “despite its widespread use, price discrimination is largely happening without the knowledge of the general public, whose generally negative opinion of the practice has yet to be heard.”

More serious opponents move past the “creepiness” and into the legal and ethical issues that can pop up. As the Tinder case demonstrates, online discrimination can take an illegal form, violating state or federal law. Discrimination can also be much more malicious, allowing for companies looking for new employees to choose who even sees the job opening, based on factors like race, age, or gender. As Gillian B. White recently summarized nicely, “while it’s perfectly legal to advertise men’s clothing only to men, it’s completely illegal to advertise most jobs exclusively to that same group.” Now, as the Tinder case demonstrates, in certain scenarios it may be illegal to discriminate in pricing as well as job searches.

So what can be done about this, from a legal perspective? Currently in the United States the main price discrimination laws, the Sherman Antitrust Act, the Clayton Act, and the Robinson-Patman Act, were created long before the advent of the internet, and they allow for price discrimination as long as there is a “good faith reason.” (Part of the trouble Tinder ran into in litigation was a judge’s finding that there was no good faith reason for discriminating as it did.) There are also a plethora of hiring laws that make certain discrimination by employers illegal. Therefore, the best current option may be for internet watchdog groups to keep a keen eye out for these practices and report what they come across.

As far as how the law can be changed, an interesting option exists elsewhere in the world. European Union data privacy laws may soon make some price discrimination illegal, or at the very least, significantly more transparent so users are aware of how their data is being used. Perhaps by similarly shining sunlight on the issue here in the states, consumers will begin forcing companies to change their practices.


Controversial Anti-Sex Trafficking Bill Eliminates Safe-Harbor for Tech Companies

Maya Digre, MJLST Staffer

Last week the U.S. Senate voted to approve the Stop Enabling Sex Traffickers Act. The U.S. House of Representatives also passed a similar bill earlier this year. The bill creates an exception to Section 230 of the Communications Decency Act that allows victims of sex trafficking to sue websites that enabled their abuse. The bill was overwhelmingly approved in both the U.S. House and Senate, receiving 388-25 and 97-2 votes respectively. President Trump has indicated that he is likely to sign the bill.

Section 230 of the Communications Decency Act shields websites from liability stemming from content posted by third parties on their sites. Many tech companies argue that this provision has allowed them to become successful without a constant threat of liability. However, websites like Facebook, Google, and Twitter have recently received criticism for the role they played in unwittingly enabling meddling in the 2016 presidential election. Seemingly, the “hands off” approach of many websites has become a problem that Congress now seeks to address, at least with respect to sex trafficking.

The proposed exception would expose websites to liability if they “knowingly” assist, support, or facilitate sex trafficking. The bill seeks to make websites more accountable for posts on their site, discouraging a “hands off” approach.

While the proposed legislation has received bipartisan support from Congress, it has been quite controversial in many communities. Tech companies, free-speech advocates, and consensual sex workers all argue that the bill will have unintended adverse consequences. The tech companies and free-speech advocates argue that the bill will stifle speech on the internet, and force smaller tech companies out of business for fear of liability. Consensual sex workers argue that this bill will shut down their online presence, forcing them to engage in high-risk street work. Other debates center on how the “knowingly” standard will affect how websites are run. Critics argue that, in response to this standard, “[s]ites will either censor more content to lower risk of knowing about sex trafficking, or they will dial down moderation in an effort not to know.” At least one website has altered its behavior in the wake of this bill. In response to this legislation, Craigslist has removed the “personal ad” platform from its website.


Judicial Interpretation of Emojis and Emoticons

Kirk Johnson, MJLST Staffer

In 2016, the original 176 emojis created by Shigetaka Kurita were enshrined in New York’s Museum of Modern Art as just that: art. Today, a smartphone contains approximately 2,000 icons that many use as a communication tool. New communicative tools present new problems for users and the courts alike; when the recipient of a message including an icon interprets the icon differently than the sender, how should a court view that icon? How does it affect the actus reus or mens rea of a crime? While courts have a myriad of tools to decipher the meaning of new communicative tools, the lack of a universal understanding of these icons has created interesting social and legal consequences.

The first of many problems with the use of an emoji is that there is general disagreement on what the actual icon means. Take this emoji for example: 🙏. In a recent interview by the Wall Street Journal, people aged 10-87 were asked what this symbol meant. Responses varied from hands clapping to praying. The actual title of the emoji is “Person with Folded Hands.”

Secondly, the icons can change over time. Consider the update of the Apple iOS from 9 to 10; many complained that this emoji, 💁, lost its “sass.” It is unclear whether the emoji was intended to have “sass” to begin with, especially since the title of the icon is “Information Desk Person.”

Finally, actual icons vary from device to device. In some instances, when an Apple iPhone user sends a message to an Android phone user, the icon that appears on the recipient’s screen is completely different than what the sender intended. When Apple moved from iOS 9 to iOS 10, they significantly altered their pistol emoji. While an Android user would see something akin to this 🔫, an iPhone user sees a water pistol. Sometimes, an equivalent icon is not present on the recipient’s device and the only thing that appears on their screen is a black box.
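This divergence is possible because what actually travels between devices is a Unicode code point, not an image; each vendor draws its own glyph for that code point. Python’s unicodedata module can look up the official character names mentioned above:

```python
import unicodedata

# What is transmitted is the code point; the picture drawn for it is
# left to each vendor's font.
for emoji in ("\U0001F64F", "\U0001F481", "\U0001F52B"):
    print(f"U+{ord(emoji):X}", unicodedata.name(emoji))
# U+1F64F PERSON WITH FOLDED HANDS
# U+1F481 INFORMATION DESK PERSON
# U+1F52B PISTOL
```

So when Apple redrew U+1F52B as a water pistol, the character itself never changed; only Apple’s glyph did, which is why an Android recipient may still see a revolver.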

Text messages and emails are extremely common pieces of evidence in a wide variety of cases, from sexual harassment litigation to contract disputes. Recently, the Ohio Court of Appeals was called upon to determine whether the text message “come over” with a “winky-face emoji” was adequate evidence to prove infidelity. State v. Shepherd, 81 N.E.3d 1011, 1020 (Ohio Ct. App. 2017). A Michigan sexual harassment attorney’s client was convinced that an emoji that looked like a horse followed by an icon resembling a muffin meant “stud muffin,” which the client interpreted as an unwelcome advance from a coworker. Luckily, messages consisting entirely of icons rarely determine the outcome of a case on their own; in the sexual harassment arena, a single advance from an emoji message would not be sufficient to make a case.

However, the implications are much more dangerous in the world of contracts. According to the Restatement (Second) of Contracts § 20 (1981),

(1) There is no manifestation of mutual assent to an exchange if the parties attach materially different meanings to their manifestations and

(a) neither party knows or has reason to know the meaning attached by the other; or

(b) each party knows or each party has reason to know the meaning attached by the other.

(2) The manifestations of the parties are operative in accordance with the meaning attached to them by one of the parties if

(a) that party does not know of any different meaning attached by the other, and the other knows the meaning attached by the first party; or

(b) that party has no reason to know of any different meaning attached by the other, and the other has reason to know the meaning attached by the first party.

 

Adhering to this standard with emojis would produce varied and unexpected results. For example, if Adam sent Bob the message “I’ll give you $5 to mow my lawn 😉,” would Bob be free to accept the offer? Would the answer be different if Adam used the 😘 emoji instead of the 😉 emoji? What if Bob received a black box instead of any emoji at all? Conversely, if Adam sent Bob the message without an emoji and Bob replied “Sure 😉,” should Adam be able to rely upon Bob’s message as acceptance? In 2014, the Michigan Court of Appeals ruled that the emoticon “:P” denoted sarcasm and that the text preceding it should be interpreted sarcastically. Does this extend to the emojis 😜, 😝, and 😛, titled “Face with Stuck-Out Tongue And Winking Eye,” “Face With Stuck-Out Tongue And Tightly-Closed Eyes,” and “Face With Stuck-Out Tongue” respectively?
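Purely as an illustration (a toy sketch, not a statement of law), the Restatement’s branches can be condensed into a decision procedure. Here “knows” collapses the Restatement’s fuller “knows or has reason to know,” and the sketch assumes the parties have already attached materially different meanings to the emoji:

```python
def whose_meaning_controls(a_knows_b: bool, b_knows_a: bool):
    """Toy model of Restatement (Second) of Contracts § 20, assuming the
    parties attached materially different meanings to the message."""
    if not a_knows_b and not b_knows_a:
        return None  # § 20(1)(a): neither knew of the conflict, so no assent
    if a_knows_b and b_knows_a:
        return None  # § 20(1)(b): both knew of the conflict, so no assent
    # § 20(2): only one party knew of the conflict; the unknowing
    # (innocent) party's meaning is the one that governs.
    return "A" if b_knows_a else "B"

# Adam's winking-face offer: if Bob realizes Adam means it as a joke while
# Adam has no idea Bob reads it as a genuine offer, Adam's meaning controls.
print(whose_meaning_controls(a_knows_b=False, b_knows_a=True))  # A
```

The hard part, of course, is the factual predicate the sketch takes as given: with emoji, a court must first decide whether the parties attached different meanings at all, and who had reason to know it.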

In a recent case in Israel, a judge ruled that the message “✌👯💃🍾🐿☄” constituted acceptance of a rental contract. While the United States applies its own contract-law standards, it seems that a judge could find such a message to be acceptance under Restatement (Second) of Contracts § 20(2). Eric Goldman at the Santa Clara University School of Law hypothesizes that an emoji dictionary might help alleviate this issue. While a new Black’s Emoji Law Dictionary may seem unnecessary to many, without some sort of action it will be the courts deciding what an emoji truly means. In a day when courts rule that a jury is entitled to actually see the emoji rather than have a description read to them, we can’t ignore the reality that action is necessary.


E-Threat: Imminent Danger in the Information Age

MJLST Staffer, Jacob Weindling

 

One of the basic guarantees of the First Amendment is the right to free speech. This right protects individuals from restrictions on speech by the government, but it is often invoked as a rhetorical weapon against private individuals or organizations that decline to publish another’s words. On the internet, these organizations include some of the most popular discussion platforms in the U.S., including Facebook, Reddit, Yahoo, and Twitter. A key feature of these organizations is their lack of government control. As recently as 2017, the Supreme Court has identified First Amendment grounds for overturning prohibitions on social media access. Indeed, one of the only major government prohibitions on speech currently in force is the ban on child pornography. Violent rhetoric, meanwhile, continues to fall under the constitutional protections identified by the Court.

Historically, the Supreme Court has taken a nuanced view of violent speech as it relates to the First Amendment. The Court held in Brandenburg v. Ohio that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Contrast this with Noto v. United States, in which the Court held that the abstract teaching of a moral responsibility to resort to violence is distinct from preparing a group for imminent violent acts.

With the rise and maturation of the internet, public discourse has entered relatively uncharted territory that the Supreme Court would have been hard-pressed to anticipate at the time of the Brandenburg and Noto decisions. Where geography once isolated Neo-Nazi groups and the Ku Klux Klan into small local chapters, the internet now provides a centralized meeting place for the dissemination and discussion of violent rhetoric. Historically, the Supreme Court concerned itself mightily with the distinction between an imminent call to action and a general discussion of moral imperatives, drawing clear delineations between the two.

The context of the Brandenburg decision was a pre-information-age telecommunications regime. While large amounts of information could be transmitted around the world in relatively short order thanks to the development of international commercial air travel, real-time communication was generally limited to telephone conversations between two individuals. An imminent call to action would require substantial real-world logistics, meetings, and preparation, all of which provide significant opportunities for detection and disruption by law enforcement. By comparison, internet forums today provide near-instant communication among large groups of individuals across the entire world, likely narrowing the window that law enforcement would have to identify and act upon a credible, imminent threat.

At what point does Islamic State recruitment or militant Neo-Nazi organizing on the internet rise to the level of an imminent threat? The Supreme Court has not yet decided the issue, but many internet businesses have recently begun to take matters into their own hands. Facebook and YouTube have reportedly been more active in policing Islamic State propaganda, while Reddit has taken some steps to remove communities that advocate for rape and violence. Consequently, while the Supreme Court has not yet elected to draw (or redraw) a bright red line in the internet age, many businesses appear to be taking the first steps to draw the line themselves, on their terms.


Fi-ARRR-E & Fury: Why Even Reading the Pirated Copy of Michael Wolff’s New Book Is Probably Copyright Infringement

By Tim Joyce, MJLST EIC-Emeritus

 

THE SITUATION

Lately I’ve seen several Facebook links to a pirated copy of Fire & Fury: Inside the Trump White House, the juicy Michael Wolff expose documenting the first nine months of the President’s tenure. The book reportedly gives deep, behind-the-scenes perspectives on many of Mr. Trump’s most controversial actions, including firing James Comey and accusing President Obama of wiretapping Trump Tower.

 

It was therefore not surprising when Trump lawyers slapped a cease & desist letter on Wolff and his publisher. While there are probably volumes yet to be written about the merits of those claims (in my humble opinion: “sorry, bros, that’s not how defamation of a public figure works”), this blog post deals with the copyright implications of sharing and reading the pirated copy of the book, and the ethical quandaries it creates. I’ll start with the straightforward part.

 

THE APPLICABLE LAW

First, it should almost go without saying that the person who initially created the PDF copy of the 300+ page book broke the law. (Full disclosure: I did click on the Google link, but only to verify that it was indeed the book and not just a cover page. It was. Even including the page with copyright information!) I’ll briefly connect the dots for any copyright-novices reading along:

 

    • Wolff is the “author” of the book, a “literary work” that is among the “original works of authorship fixed in any tangible medium of expression” [see 17 USC 102].
    • As the author, one of his copyrights is to control … well … copying. The US Code calls that “reproduction” [see 17 USC 106].
    • He also gets the exclusive right to “display” the literary work “by means of a film, slide, television image, or any other device or process” [see 17 USC 101]. Basically, he controls display in any medium like, say, via a Google Drive folder.
    • Unauthorized reproduction, display, and/or distribution is called “infringement” [see 17 USC 501]. There are several specific exceptions carved into the copyright code for different types of creative works, uses, audiences, and other situations. But this doesn’t fall into one of those exceptions.
    • So, the anonymous infringer has broken the law.
    • [It’s not clear, yet, whether this person is also a criminal under 17 USC 506, because I haven’t seen any evidence of fraudulent intent or acting “for purposes of commercial advantage or private financial gain.”]

 

Next, anyone who downloads a copy of the book onto their smartphone or laptop is also an infringer. The same analysis applies as above, only with a different starting point. The underlying material’s copyright is still held by Wolff as the author. Downloading creates a “reproduction,” which is still unauthorized by the copyright owner. Unauthorized exercise of rights held exclusively by the author + no applicable exceptions = infringement.

 

Third, I found myself stuck as to whether I, as a person who had intentionally clicked through into the Google Drive hosting the PDF file, had also technically violated copyright law. Here, I hadn’t downloaded, but merely clicked the link, which launched the PDF in a new Chrome tab. The issue I got hung up on was whether that had created a “copy,” that is, one of the “material objects … in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” [17 USC 101]

 

Computer reproductions are tricky, in part because US courts lately haven’t exactly given clear guidance on the matter. (Because I was curious: in Europe and the UK, it seems there’s an exception for temporary virtual copies, but only when incidental to lawful uses.) There’s some debate as to whether it’s infringement if only the computer is reading the file, and for a purpose different than perceiving the artistic expression. (You may remember the Google Books cases…) However, when it’s humans doing the reading, that “purpose of the copying” argument seems to fall by the wayside.

 

Cases like Cartoon Network v. CSC Holdings have attempted to solve the problem of temporary copies (as when a new browser window opens), but the outcome there (i.e., temporary copies = ok) was based in part on the fact that the streaming service being sued had the right to air the media in question. Their copy-making was merely for the purpose of increasing speed and reducing buffering for their paid subscribers. Here, where the right to distribute the work is decidedly absent, the outcome seems like it should be the opposite. There may be a case out there that deals squarely with this situation, but it’s been a while since copyright class (yay, graduation!) and I don’t have free access to Westlaw anymore. It’s the best I could do in an afternoon.

 

Of course, an efficient solution here would be to first crack down on the entities and individuals that first make the infringement possible – ISPs and content distributors. The Digital Millennium Copyright Act already gives copyright owners a process to make Facebook take bootleg copies of their stuff down. But that only solves half the problem, in my opinion. We have to reconcile our individual ethics of infringement too.

 

ETHICAL ISSUES, FOR ARTISTS IN PARTICULAR

One of the more troubling aspects of this pirateering that I saw was that the link-shares came from people who make their living in the arts. These are the folks who–rightly, in my opinion–rail against potential “employers” offering “exposure” instead of cold hard cash when they agree to perform. To expect to be paid for your art, while at the same time sharing an illegal copy of someone else’s, is logically inconsistent to me.

 

As a former theater actor and director (read: professional almost-broke person) myself, I can understand the desire to save a few dollars by reading the pirated copy. The economics of making a living performing are tough – often you agree to take certain very-low-paying artistic jobs as loss-leaders toward future jobs. But I have only met a very few of us willing to perform for free, and even fewer who would tolerate rehearsing with the promise of pay only to be stiffed after the performance is done. That’s essentially what’s happening when folks share this bootleg copy of Michael Wolff’s book.

 

I’ve heard some relativistic views on the matter, saying that THIS book containing THIS information is so important NOW, that a little infringement shouldn’t matter. But you could argue that Hamilton, the hit musical about the founding of our nation and government, has equally urgent messages regarding democracy, totalitarianism, individual rights, etc. Should anyone, therefore, be allowed to just walk into the theater and see the show without paying? Should the cast be forced to continue performing even when there is no longer ticket revenue flowing to pay for their efforts? I say that in order to protect justice at all times, we have to protect justice this time.

 

tl;dr

Creating, downloading, and possibly even just viewing the bootleg copy of Michael Wolff’s book circulating around Facebook is copyright infringement. We cannot violate this author’s rights now if we expect to have our artistic rights protected tomorrow.

 

Contact Me!

These were just some quick thoughts, and I’m sure there’s more to say on the matter. If you’d like to discuss any copyright issues further, I’m all ears.


Sex Offenders on Social Media?!

Young Choo, MJLST Staffer

 

A sex offender’s access to social media is increasingly problematic, especially considering the vast number of dating apps people use to meet other users. Crimes committed through dating apps (such as Tinder and Grindr) include rape, child sex grooming, and attempted murder, and reports of such crimes have increased seven-fold in just two years. Although sex offenders are required to register with the state, and individuals can access each state’s sex offender registry online, there are few laws or regulations designed to combat this specific situation, in which minors or other young adults can become victims of sex crimes. A new dating app called “Gatsby” was introduced to address this problem. When new users sign up for Gatsby, they are put through a criminal background check, which includes sex offender registries.

Should sex offenders even be allowed access to social media? In the recent case Packingham v. North Carolina, the Supreme Court decided that a North Carolina law preventing sex offenders from accessing commercial social networking websites was unconstitutional under the First Amendment’s Free Speech Clause. The Court emphasized that access to social media is vital to citizens’ exercise of their First Amendment rights. The North Carolina law was struck down mainly because it wasn’t “narrowly tailored to serve a significant governmental interest,” but the Court noted that its decision does not prevent a state from enacting more specific laws to address and ban certain activities of sex offenders on social media.

The new online dating app, Gatsby, cannot be the only solution. There are already an estimated 50 million Tinder users worldwide, and those users have no way of determining whether their matches may be sex offenders. New laws narrowly tailored to the situation, perhaps requiring dating apps to run background checks on users or to otherwise prevent sex offenders from utilizing the apps, might be necessary to reduce the increasing number of crimes committed through dating apps.