Social Media

Congress, Google Clash Over Sex-Trafficking Liability Law

Samuel Louwagie, MJLST Staffer

Should web companies be held liable when users engage in criminal sex trafficking on the platforms they provide? Members of both political parties in Congress are pushing to make the answer to that question yes, over the opposition of tech giants like Google.

Section 230 traces back to the Communications Act of 1934. In 1996, as the Internet was going mainstream, Congress amended that framework with the Communications Decency Act, which included Section 230. That provision protects providers of web platforms from civil liability for content posted by users of those platforms. The act states that in order to “promote the continued development of the internet . . . No provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That protection, according to the ACLU, “defines Internet culture as we know it.”

Earlier this month, Congress debated an amendment to Section 230 called the Stop Enabling Sex Traffickers Act of 2017. The act would remove that protection from web platforms that knowingly allow sex trafficking to take place. The proposal comes after the First Circuit Court of Appeals held in March of 2016 that even though Backpage.com played a role in trafficking underage girls, section 230 protected it from liability. Sen. Rob Portman, a co-sponsor of the bill, wrote that it is Congress’ “responsibility to change this law” while “women and children have . . . their most basic rights stripped from them.” And even some tech companies, such as Oracle, have supported the bill.

Google, meanwhile, has resisted such emotional pleas. Its lobbyists have argued that Backpage.com could already be criminally prosecuted, and that stripping core protections from internet companies would damage the free nature of the web. Critics of that position, such as New York Times columnist Nicholas Kristof, counter that the Stop Enabling Sex Traffickers Act was crafted “exceedingly narrowly to target those intentionally engaged in trafficking children.”

The bill has bipartisan support and appears to be gaining steam. The Internet Association, a trade group that includes Google and Facebook, expressed a willingness at a Congressional hearing to support “targeted amendments” to the Communications Decency Act. Whether Google likes it or not, platforms may eventually be at legal risk if they do not police their content for sex trafficking.


Faux News vs. Freedom of Speech?

Tyler Hartney, MJLST Staffer

This election season produced a lot of jokes on social media. Some of the jokes are funny; others lack an obvious punch line. Multiple outlets are now reporting that fake news may have influenced voters in the 2016 presidential election. Both Facebook and Google have made conscious efforts to reduce the appearance of these fake news stories on their sites, in an attempt to cut off the click bait, and thus the revenue streams, of these faux news outlets. With the expansion of technology and social media, such stories now circulate widely enough to spread misinformation on a massive scale. Is this like screaming “fire” in a crowded theatre? How biased would filtering this speech become? Facebook was blasted by the media when it was found to have suppressed conservative news outlets, but as a private business it had every right to do so. Experts now say that the Russian government made efforts to spread this fake news to help Donald Trump win the presidency.

First, the only entity that cannot place limits on speech is the state. If Facebook or Google chose to filter the news broadcast on each site, users still would have no claim against the company; that would be considered a private business decision. These faux news outlets circulate stories that appear, at times, intentionally and willfully misleading. Is this similar to a man shouting “fire” in a crowded theatre? In essence, the man in that commonly used hypothetical knows that his statement is false and that it has a high probability of inciting panic, while the general public cannot gauge the validity of his statement and has no time to check. That second part is key: the public has no time to check. If the government were to begin passing regulations and cracking down on the circulation and creation of these hoax news stories, it would have to prove that the stories create a “clear and present danger” of substantive evils that Congress has a right to protect the public from — the standard set out in the Supreme Court’s decision in Schenck v. United States. The government is not likely to be capable of banning these faux news stories because, while some may consider them dangerous, the audience retains the ability to validate content from untrusted sources.

Even contemplating government action under this circumstance would require the state to walk a fine line with freedom of political expression. What is humorous, and what is dangerously misleading? For example, The Onion posted an article entitled “Biden Forges President’s Signature on Executive Order 54723.” Clearly this is a joke; however, it holds the potential to incite fury among those who might believe it and to create a misinformed public that might treat it as material information when casting a ballot. In its capacity to mislead, the Onion article is not notably different from a post entitled “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE” published by the Denver Guardian; there are no readily identifiable differences between the two stories. This gray area would make it extremely difficult to methodically stop the production of fake news while ensuring the protection of comedic parody news. The most realistic way to protect the public from the stories that, by these accounts, the Russian government has pushed on the American voting public to influence election outcomes is to educate the public on how to verify online sources.


The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to grow in scope, the information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in the article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is reliability. To be admitted as evidence, the source of information must be authenticated so that a fact-finder may rely on the source, and ultimately its content, as trustworthy and accurate. However, social media sites are particularly susceptible to forgery, hacking, and alteration. Without a confession, it is often difficult to determine who actually authored the posted content.

Courts grapple with this issue. Some admit social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4); others treat authentication as a relatively low bar, holding that once a witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


Are News Aggregators Getting Their Fair Share of Fair Use?

Mickey Stevens, MJLST Note & Comment Editor

Fair use is an affirmative defense to copyright infringement that permits the use of copyrighted materials without the author’s permission when doing so fulfills copyright’s goal of promoting the progress of science and useful arts. One factor that courts analyze is the purpose and character of the use, including whether the use is of a commercial nature or for nonprofit educational purposes — and, relatedly, whether the use is “transformative.” Recently, courts have had to determine whether automatic news aggregators can invoke the fair use defense against claims of copyright infringement. An automatic news aggregator scrapes the Internet and republishes pieces of the original sources without adding commentary to the original works.

In Spring 2014, MJLST published “Associated Press v. Meltwater: Are Courts Being Fair to News Aggregators?” by Dylan J. Quinn. That article discussed the Meltwater case, in which the United States District Court for the Southern District of New York held that Meltwater—an automatic news aggregator—could not invoke the defense of fair use because its use of copyrighted works was not “transformative.” Meltwater argued that it should be treated like search engines, whose actions do constitute fair use. The court rejected this argument, stating that Meltwater customers were using the news aggregator as a substitute for the original work, instead of clicking through to the original article like a search engine.

In his article, Quinn argued that the Meltwater court’s interpretation of “transformative” was too narrow, and that such an interpretation made an untenable distinction between search engines and automatic news aggregators who function similarly. Quinn asked, “[W]hat if a news aggregator can show that its commercial consumers only use the snippets for monitoring how frequently it is mentioned in the media and by whom? Is that not a different ‘use’?” Well, the recent case of Fox News Network, LLC v. TVEyes, Inc. presented a dispute similar to Quinn’s hypothetical that might indicate support for his argument.

In TVEyes, Fox News claimed that TVEyes, a media-monitoring service that aggregated news reports into a searchable database, had infringed copyrighted clips of Fox News programs. The TVEyes database allowed subscribers to track when, where, and how words of interest are used in the media—the type of monitoring that Quinn argued should constitute a “transformative” use. In a 2014 ruling, the court held that TVEyes’ search engine that displayed clips was transformative because it converted the original work into a research tool by enabling subscribers to research, criticize, and comment. 43 F. Supp. 3d 379 (S.D.N.Y. 2014). In a 2015 decision, the court analyzed a few specific features of the TVEyes service, including an archiving function and a date-time search function. 2015 WL 5025274 (S.D.N.Y. Aug. 25, 2015). The court held that the archiving feature constituted fair use because it allowed subscribers to detect patterns and trends and save clips for later research and commentary. However, the court held that the date-time search function (allowing users to search for video clips by date and time of airing) was not fair use. The court reasoned that users who have date and time information could easily obtain that clip from the copyright holder or licensing agents (e.g. by buying a DVD).

The court’s decision did point out that a database of video clips differs in kind from a collection of print news articles. Still, the TVEyes decisions suggest that courts may now be willing to let automatic news aggregators invoke the fair use defense when they can show that their collections enable consumers to track patterns and trends in print news for research, criticism, and commentary. Thus, the TVEyes decisions may lead courts to reconsider the distinction between search engines and automatic news aggregators established in Meltwater — a distinction that puts news aggregators at a disadvantage when it comes to fair use.


The Limits of Free Speech

Paul Overbee, MJLST Editor

A large portion of society does not put much thought into what they post on the internet. From tweets and status updates to YouTube comments and message board activities, many individuals post on impulse, without regard to how their messages may be interpreted by a wider audience. Anthony Elonis is just one of many internet users coming to terms with the consequences of their online activity. Oddly enough, by posting on Facebook, Mr. Elonis took the first steps that ultimately led him to the Supreme Court. The Court is now considering whether his posts were simply a venting of frustration, as Mr. Elonis claims, or a “true threat” that will send him to jail.

The incident in question began a week after Tara Elonis obtained a protective order against her husband. Upon receiving the order, Mr. Elonis posted to Facebook, “Fold up your PFA [protection-from-abuse order] and put it in your pocket […] Is it thick enough to stop a bullet?” According to Mr. Elonis, he was trying to emulate the rhyming style of the popular rapper Eminem. At a later date, an FBI agent visited Mr. Elonis regarding his threatening posts about his wife. Soon after the agent left, Mr. Elonis again returned to Facebook to post, “Little agent lady stood so close, took all the strength I had not to turn the [expletive] ghost. Pull my knife, flick my wrist and slit her throat.”

Due to these posts, Mr. Elonis was sentenced to nearly four years in federal prison, and Elonis v. United States is now in front of the Supreme Court. Typical state statutes define these “true threats” without any regard to whether the speaker actually intended to cause such terror. For example, Minnesota’s “terroristic threats” statute includes “reckless disregard of the risk of causing such terror.” Some states allow for a showing of “transitory anger” to overcome a “true threat” charge. This type of defense arises where the defendant’s actions are short-lived, have no intent to terrorize, and clearly are tied to an inciting event that caused the anger.

The Supreme Court’s decision will carry wide First Amendment implications for free speech rights and artistic expression. A decision that comes down harshly on Mr. Elonis may chill speech on the internet. Whether a statement reads as serious or joking often depends on the point of view of the reader, and many would rather stop posting altogether than risk having their words misinterpreted and charges brought. On the other hand, if the Court were to look to the speaker’s intent, “true threat” statutes may lose much of their force due to evidentiary problems. A decision in favor of Mr. Elonis may thus lead to a more menacing internet, where criminals such as stalkers have a longer leash with which to torment their victims. Oral argument in the case was held on December 1, 2014, and a decision will be issued in the near future.


An Authorship-Centric Approach to the Authentication of Social-Networking Evidence

Sen “Alex” Wang, MJLST Staff Member

In Volume 13, Issue 1 of the Minnesota Journal of Law, Science & Technology, Ira P. Robbins called for special attention to social-networking evidence used in civil and criminal litigation and proposed an authorship-centric approach to the authentication of such evidence. In recent years, social-networking websites like Facebook, MySpace, and Twitter have become an ingrained part of our culture. However, at least as it appears to Robbins, people are stupid when it comes to their online postings, documenting their every move on social-networking sites no matter how foolish or incriminating. The lives and careers not only of ordinary citizens, but also of lawyers, judges, and even members of Congress have been damaged by their own social-networking postings.

Social-networking sites are designed to facilitate interpersonal relationships and information exchange, but they have also been used to harass, intimidate, and emotionally abuse or bully others. With no effective check on fake accounts or false profiles, the anonymity of social-networking sites permits stalkers and bullies to take their harmful conduct well beyond traditional harassment; the infamous Lori Drew and Latisha Monique Frazier cases provide excellent examples. Moreover, hackers and identity thieves have taken advantage of the personal information posted on social-networking sites. Thus, Robbins argued that the growing popularity of social-networking sites and the rising number of fake accounts and hacking incidents signal that information from these sites will come to play a central role in both civil and criminal litigation.

Often unbeknownst to the social-networking user, postings leave a permanent trail that law-enforcement agents and lawyers frequently rely upon in crime solving and trial strategy. Robbins argued that the ease with which social-networking evidence can be altered, forged, or posted by someone other than the owner of the account should raise substantial admissibility concerns. Specifically, Robbins stated that social-networking postings are comparable to postings on websites rather than e-mails. Thus, the authentication of social-networking evidence is the critical first step to ensuring that the admitted evidence is trustworthy and, ultimately, that litigants receive a fair and just trial.

Robbins further argued that the current judicial approaches to authenticating such evidence have failed to require rigorous showings of authenticity despite the demonstrated unreliability of information on social-networking sites. Under the first approach, the court effectively shirks its gate-keeping function, deflecting all reliability concerns associated with social-networking evidence to the finder of fact. Under the second approach, the court authenticates a social-networking posting by relying solely on the testimony of the recipient. The third approach requires testimony about who, aside from the owner, can access the social-networking account in question. The fourth approach focuses on establishing the author of a specific posting but fails to provide a thorough framework for doing so.

As a solution, Robbins proposed an authorship-centric approach that instructs courts to evaluate multiple factors when considering evidence from social-networking websites. The factors fall into three categories: account security, account ownership, and the posting in question. Although no one factor in these categories is dispositive, addressing each will help to ensure that admitted evidence possesses more than a tenuous link to its purported author. For account security, the inquiry should include at least the following questions: (1) Does the social-networking site allow users to restrict access to their profiles or certain portions of their profiles? (2) Is the account that was used to post the proffered evidence password protected? (3) Does anyone other than the account owner have access to the account? (4) Has the account been hacked into in the past? (5) Is the account generally accessed from a personal or a public computer? (6) How was the account accessed at the time the posting was made? As to account ownership, a court should address, at a minimum, the following key questions: (1) Who is the person attached to the account that was used to post the proffered evidence? (2) Is the e-mail address attached to the account one that is normally used by the person? (3) Is the alleged author a frequent user of the social-networking site in question? Finally, the court should ask at least these questions regarding the posting in question: (1) How was the evidence at issue placed on the social-networking site? (2) Did the posting at issue come from a public or a private area of the social-networking website? (3) How was the evidence at issue obtained from the website?

This authorship-centric approach properly shifts a court’s attention from content and account ownership to authorship, and it underscores the importance of fairness and accuracy in the outcome of judicial proceedings that involve social-networking evidence. In addition, it fits within the current circumstantial-evidence authentication framework set out by Federal Rule of Evidence 901(b)(4) and will not require courts to engage in a more exhaustive inquiry than is already required for other types of evidence.


Open Patenting, Innovation, and the Release of the Tesla Patents

Blake Vettel, MJLST Staff Member

In Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology, Mariateresa Maggiolino and Maria Lillà Montagnani proposed a framework of standardized terms and conditions for Open Patenting. The framework sets out a standard system through which patent holders of all sizes can easily license their patents in order to encourage open innovation. Maggiolino and Montagnani argued for an open patenting scheme in which the patent owner irrevocably spreads the patented knowledge worldwide through non-exclusive, no-charge licensing. Furthermore, the licensing system would be centrally operated online and would allow the patentee to customize certain clauses in the licensing agreement, while maintaining a few compulsory clauses — such as a non-assertion pledge — that would keep the license open.

On June 12, 2014 Elon Musk, CEO of Tesla Motors, shocked the business world by announcing via blog post that “Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.” Musk described his reasoning for opening Tesla’s patents for use by others as a way to encourage innovation and growth within the electric car market, and depicted Tesla’s true competition as gasoline cars instead of electric competitors. By allowing use of their patented technology, Tesla hopes to develop the electric car market and encourage innovation. Some commentators have been skeptical about the altruistic motive behind releasing the patents, arguing that it may in fact be a move intended to entice other electric car manufacturers to produce cars that are compatible with Tesla’s patented charging stations in an effort to develop the network of stations around the country.

However, Musk did not unequivocally release these patents; he conditioned their subsequent use on it being in “good faith.” What constitutes a good faith use of Tesla’s technology is not clear. Tesla could instead have opted for a standardized licensing system of the sort proposed by Maggiolino and Montagnani. A clear, standardized licensing scheme with compulsory clauses designed to encourage the free movement of patented technology might have been more effective in promoting use of Tesla’s patents: an inventor who wants to use those patents may be hesitant to rely on Musk’s promise not to initiate lawsuits, whereas he could be much more confident of his right to use the patented technology under a licensing agreement. The extent to which Tesla’s patents will be used, and their effect on the car market and open innovation, remains to be seen — as does the true value of Tesla’s open innovation.


Anti-Cyberbullying State Statutes Should Prompt a Revisiting of the Communications Decency Act

Nia Chung, MJLST Staff

Cyberbullying comes in varying forms. Online outlets with user identification features, such as Facebook and MySpace, give third-party attackers a platform to target individuals while remaining identifiable to the victim. That transparency of identification gives victims a possible avenue of redress without involving the Internet Service Providers (ISPs).

In February 2014, Bryan Morben published an article on cyberbullying in volume 15.1 of the Minnesota Journal of Law, Science and Technology. In that article Mr. Morben wrote that Minnesota’s new anti-cyberbullying statute, the “Safe and Supportive Minnesota Schools Act” H.F. 826 would “reconstruct the Minnesota bullying statute and would provide much more guidance and instruction to local schools that want to create a safer learning environment for all.” Mr. Morben’s article analyzes the culture of cyberbullying and the importance of finding a solution to such actions.

Another form of cyberbullying has been emerging, however, and state initiatives such as the Safe and Supportive Minnesota Schools Act may prompt Congress to revisit current, outdated, federal law. This form of cyberbullying occurs on websites that provide third parties the ability to hide behind the cloak of anonymity to escape liability for improper actions, like 4chan and AOL.

On September 22, 2014, British actress Emma Watson delivered a powerful U.N. speech about women’s rights. Less than 24 hours later, a webpage titled “Emma You Are Next” appeared, displaying the actress’s face next to a countdown suggesting that Ms. Watson would be targeted that Friday. The webpage was stamped with the logo of 4chan, the same entity said to have leaked photos of celebrities including Jennifer Lawrence earlier that summer. On the same website, one anonymous member responded to Ms. Watson’s speech by stating, “[s]he makes stupid feminist speeches at UN, and now her nudes will be online.” Problematically, the law provides no incentive for such ISPs to remove defamatory content, because a federal statute bars them from liability. The Communications Decency Act, 47 U.S.C. § 230, provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, this provision gives ISPs immunity from tort liability for content generated on a user-generated website. Codified in 1996, initially to regulate pornographic material, the statute added sweeping protection for ISPs. But when it was enacted, the internet was relatively young and had yet to realize its full potential.

Courts historically have applied Section 230 broadly and have prevented ISPs from being held liable in cyberbullying actions brought by victims of cyberbullying on their forums. For example, the Ninth Circuit upheld CDA immunity for an ISP that distributed to a listserv an allegedly defamatory email authored by a third party. The Fourth Circuit immunized an ISP even though it acknowledged that the content was tortious. The Third Circuit upheld immunity for AOL against allegations of negligence because punishing the ISP for its third party’s content would punish “actions quintessentially related to a publisher’s role.” Understandably, the First Amendment protects the free exchange of information and ideas, which includes private individuals’ right to anonymous speech. We must ask, however, where the line should be drawn when anonymity serves not as a tool to communicate with others in a public forum but merely as a tool to harm individuals, their reputations, and their images.

In early April of this year, the “Safe and Supportive Minnesota Schools Act” was approved and officially went into effect. A growing number of states now have anti-cyberbullying statutes in place (a list is available at http://www.cyberbullying.us/Bullying_and_Cyberbullying_Laws.pdf), demonstrating positive reform aimed at keeping users safe in a rapidly changing and often hostile online environment. Opinions from both critics and advocates were voiced over the course of the bill’s passage, and how effectively Minnesota will apply its cyberbullying statute remains to be seen. A closer look at the culture of cyberbullying, as discussed in Mr. Morben’s article, and at the increasing number of anti-cyberbullying state statutes may prompt Congress to revisit Section 230 of the Communications Decency Act — to at least modestly reform ISP immunity and give victims of cyber-attacks some form of meaningful redress.


The Importance of Appropriate Identification Within Social Networking

by Shishira Kothur, UMN Law Student, MJLST Staff

Social networking has become a prominent form of communication and expression in society. Many people post and blog about their personal lives, believing they are hidden behind separate account names. This supposed anonymity gives a false sense of security, as members of society post and upload incriminating and even embarrassing information about themselves and others. This information, while generally viewed by an individual’s 200 closest friends, has also become a part of the courtroom.

This unique issue is further explained in Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence, in Volume 13, Issue 1 of the Minnesota Journal of Law, Science and Technology. Professor Ira P. Robbins emphasizes that since social media provides an easy outlet for wrongful behavior, it will inevitably find its way into litigation as evidence. The article focuses on courts’ efforts to authenticate evidence produced from Facebook, Twitter, and other social media. Very few people take care to set appropriate privacy settings. As a result, anyone can easily find important personal information, which can be used to hack accounts, submit postings under another person’s name, and incriminate others. Similarly, fake accounts have become a prominent tool for harassing and bullying individuals, sometimes with disastrous and even suicidal consequences. With results ranging from untimely deaths to wrongful convictions, proving the authorship of such postings becomes a critical step when offering them as evidence.

Professor Robbins comments that, at present, a person can be connected to, and subsequently held legally responsible for, a posting without adequate proof that the posting is in fact theirs. The article critiques the current methods courts apply to identify these individuals, arguing that they place too much emphasis on testimony about current access, potential outside access, and other factors. It proposes instead assigning authorship to the specific item rather than to the account holder, with a focus on the type of evidence when applying Federal Rule of Evidence 901(b)(4). That focus raises the appropriate questions about account ownership, account security, and the posting at issue in the suit. The analysis thoroughly explains how this new method provides a sufficient link between the claims and the actual author. As social media continues to grow, so do the opportunities to hack, mislead, and ultimately cause harm. This influx of information needs to be filtered well in order for courts to find the truth and serve justice to the right person.


Anti-Cyberbullying Efforts Should Focus on Everyday Tragedies

by Alex Vlisides, UMN Law Student, MJLST Staff

Cyberbullying. It seems every few weeks or months, another story surfaces in the media with the same tragic narrative. A teenager was bullied, both at school and over the internet. The quiet young kid was the target of some impossibly cruel torment by their peers. Tragically, the child felt they had nowhere to turn, and took their own life.

Most recently, a 12-year-old girl from Lakeland, FL, named Rebecca Ann Sedwick jumped to her death from the roof of a factory after being bullied online for months by a group of 15 girls. The tragedy has spurred the same news narrative as the many before it, and the same calls for inadequate action: prosecute the bullies or their parents; blame the victim’s parents for not caring enough; blame the school for not stepping in.

News media’s institutional bias is to cover the shocking story. The problem is that when considering policy changes to help the huge number of kids who are bullied online, these tragic stories may be the exact wrong cases to consider. Cyberbullying is not an issue that tragically surfaces every few months like a hurricane or a forest fire. It goes on every day, in virtually every middle school and high school in the country. Schools need policies crafted not just to prevent the worst, but to make things better each day.

It is incredibly important to remember students like Sedwick. But to address cyberbullying, it may be just as important to remember the more common effects of bullying: the student who stops raising their hand in class or quits a sports team or fears even going on social media sites. These things should be thought of not as potential warning signs of a tragedy, but as small tragedies themselves.

The media will never run headlines on this side of bullying. This means that policy makers and those advocating for change must correct for this bias, changing the narrative and agenda of cyberbullying to include the common tragedies. The issue is complex, emotional and ever-changing. Though it may not make for breaking news, meaningful change will honor students like Rebecca Ann Sedwick, while protecting students who continue to face cyberbullying every day.