Privacy

Privacy, Public Facebook Posts, and the Medicalization of Everything

Peter J. Teravskis, MD/JD Candidate, MJLST Staffer

Medicalization is “a process by which human problems come to be defined and treated as medical problems.” Medicalization is not a formalized process, but is instead “a social meaning embedded within other social meanings.” As the medical domain has expanded in recent years, scholars have begun to point to problems with “over-medicalization” or “corrupted medicalization.” Specifically, medicalization is used to describe “the expansion of medicine in people’s lives.” For example, scholars have problematized the medicalization of obesity, shyness, housing, poverty, normal aging, and even dying, amongst many others. The process of medicalization has become so pervasive in recent years that various sociologists have begun to discuss it as the medicalization “of everyday life,” “of society,” “of culture,” of the human condition, and “the medicalization of everything”—i.e., turning all human difference into pathology. Similarly, developments in “technoscientific biomedicine” have led scholars to blur the line of what is exclusively “medical” into a broader process of “biomedicalization.”

Medicalization does not carry a valence of “good” or “bad” per se: medicalization and demedicalization can both restrict and expand personal liberties. However, when everyday living is medicalized there are many attendant problems. First, medicalization places problems outside a person’s control: rather than the result of choice, personality, or character, a medicalized problem is considered biologically preordained or “curable.” Medicalized human differences are no longer considered normal; therefore, “treatment” becomes a “foregone conclusion.” Because of this, companies are incentivized to create pharmacological and biotechnological solutions to “cure” the medicalized problem. From a legal perspective, Professor Adele E. Clarke and colleagues note that through medicalization, “social problems deemed morally problematic . . . [are] moved from the professional jurisdiction of the law to that of medicine.” This process is referred to, generally, as the “medicalization of deviance.” Further, medicalization can de-normalize aspects of the human condition and classify people as “diseased.”

Medicalization is important to the sociological study of social control. Social control is defined as the “mechanisms, in the form of patterns of pressure, through which society maintains social order and cohesion.” Thus, once medicalized, an illness is subject to control by medical interventions (drugs, surgery, therapy, etc.), and sick people are expected to take on the “sick role” whereby they become the subjects of physicians’ professional control. A recent example of medical social control is the social pressure to engage in hygienic habits, precautionary measures, and “social distancing” in response to the novel coronavirus that causes COVID-19. The COVID-19 pandemic is an expressly medical problem; however, when normal life, rather than a viral outbreak, is medicalized, medical social control becomes problematic. For example, the sociologist Peter Conrad argues that medical social control can take the form of “medical surveillance.” He states that “this form of medical social control suggests that certain conditions or behaviors become perceived through a ‘medical gaze’ and that physicians may legitimately lay claim to all activities concerning the condition” (quoting Michel Foucault’s seminal book The Birth of the Clinic).

The effects of medical social control are amplified due to the communal nature of medicine and healthcare, leading to “medical-legal hybrid[]” social control and, I argue, medical-corporate social control. For example, employers and insurers have interests in encouraging healthful behavior when it reduces members’ health care costs. Similarly, employers are interested in maximizing healthy working days, decreasing worker turnover, and maximizing healthy years, thus expanding the workforce. The State has similar interests, as well as interests in reducing end-of-life and old-age medical costs. At first glance, this would seem to militate against overmedicalization. However, modern epidemiological methods have revealed the long-term consequences of untreated medical problems. Thus, medicalization may divert health care dollars toward less expensive preventative interventions and away from the more expensive therapies that would otherwise be needed later in life.

An illustrative example is the medicalization of obesity. Historically, obesity was not considered a disease but a socially desirable condition: it demonstrated wealth; the ability to afford expensive, energy-dense foods; and a life of leisure rather than manual labor. Changing social norms, increased life expectancy, highly sensitive biomedical technologies for identifying subtle metabolic changes in blood chemistry, and population-level associations between obesity and later-life health complications have all contributed to the medicalization of this condition. Obesity, unlike many other conditions, is not attributable to a single biological process; rather, it is hypothesized to result from the contribution of multiple genetic and environmental factors. As such, there is no “silver bullet” treatment for obesity. Instead, “treatment” for obesity requires profound changes reaching deep into how a patient lives her life. Many of these interventions have profound psychosocial implications. Medicalized obesity has led, in part, to the stigmatization of people with obesity. Further, medical recommendations for the treatment of obesity, including gym memberships and expensive “health” foods, are costly for the individual.

Because medicalized problems are considered social problems affecting whole communities, governments and employers have stepped in to treat the problem. Politically, the so-called “obesity epidemic” has led to myriad policy changes and proposals. Restrictions designed to combat the obesity epidemic have included taxes, bans, and advertising restrictions on energy-dense food products. On the other hand, states and the federal government have implemented proactive measures to address obesity; for example, public funds have been allocated to encourage access to and awareness of “healthy foods” and healthy habits. Further, Social Security Disability, Medicare and Medicaid, and the Supplemental Nutrition Assistance Program have been modified to cope with the economic and health effects of obesity.

Other tools of control are available to employers and insurance providers. Most punitively, corporate insurance plans can increase rates for obese employees. As Abby Ellin, writing for Observer, explained, “[p]enalizing employees for pounds is perfectly legal [under the Affordable Care Act]” (citing a policy brief published in the journal Health Affairs). Alternatively, employers and insurers have paid for or provided incentives for gym memberships and use, some going so far as to provide exercise facilities in the workplace. Similarly, some employers have sought to modify employee food choices by providing or restricting food options available in the office. The development of wearable computer technologies has presented another option for enforcing obesity-focused behavioral control. Employer-provided FitBits are “an increasingly valuable source of workforce health intelligence for employers and insurance companies.” In fact, Apple advertises Apple Watch to corporate wellness divisions, and various media outlets have noted how Apple Watch and iPhone applications can be used by employers for health surveillance.

Indeed, medicalization as a pretense for technological surveillance and social control is not used exclusively in the context of obesity prevention. For instance, the medicalization of old age has coincided with the technological surveillance of older people. Most troubling, medicalization in concert with other social forces has spawned an emerging field of technological surveillance of mental illness. Multiple studies, and current NIH-funded research, aim to develop algorithms for the diagnosis of mental illness based on data mined from publicly accessible social media and internet forum posts. This process is called “social media analysis.” These technologies are actively medicalizing the content of digital communications. They subject people’s social media postings to an algorithmic imitation of the medical gaze, whereby “physicians may legitimately lay claim to” those social media interactions. If social media analysis performs as hypothesized, certain combinations of words and phrases will constitute evidence of disease. Similar technology has already been coopted as a mechanism of social control to detect potential perpetrators of mass shootings. Policy makers have already seized upon the promise of medical social media analysis as a means to enforce “red flag” laws. Red flag laws “authorize courts to issue a special type of protection order, allowing the police to temporarily confiscate firearms from people who are deemed by a judge to be a danger to themselves or to others.” Similarly, it is conceivable that this type of evidence will be used in civil commitment proceedings. If implemented, such programs would constitute a link by which medical surveillance, under the banner of medicalization, could be used as grounds to deprive individuals of civil liberty, demonstrating an explicit medical-legal hybrid social control mechanism.
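
To make the mechanics concrete, the following is a deliberately crude sketch of what keyword-based “social media analysis” could look like. The word list, weights, and threshold are invented for illustration; actual research systems use trained machine-learning models validated against clinical data, not hard-coded lists.

```python
# Illustrative only: a toy stand-in for "social media analysis." The terms,
# weights, and threshold below are invented, not clinical criteria; real
# systems use trained classifiers rather than a hard-coded word list.
RISK_TERMS = {"hopeless": 2, "worthless": 2, "alone": 1, "exhausted": 1}

def flag_post(text: str, threshold: int = 3) -> bool:
    """Return True when a public post's wording crosses the risk threshold."""
    score = sum(RISK_TERMS.get(word, 0) for word in text.lower().split())
    return score >= threshold

posts = [
    "feeling hopeless and worthless again tonight",
    "great game last night, exhausted but happy",
]
for post in posts:
    print(flag_post(post), "-", post)   # True for the first, False for the second
```

Even a sketch this crude shows the medical gaze in algorithmic form: ordinary words, once scored, become diagnostic evidence.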

What protections does the law offer? The Fourth Amendment protects people from unreasonable searches. To determine whether a “search” has occurred, courts ask whether the individual has a “reasonable expectation of privacy” in the contents of the search. Therefore, whether a person had a reasonable expectation of privacy in publicly available social media data is critical to determining whether that data can be used in civil commitment proceedings or for red flag law protective orders.

Public social media data is, obviously, public, so courts have generally held that individuals have no reasonable expectation of privacy in its contents. By contrast, the Supreme Court has ruled that individuals have a reasonable expectation of privacy in the data contained on their cell phones and personal computers, as well as their personal location data (cell-site location information) legally collected by third party cell service providers. Therefore, it is an open question how far a person’s reasonable expectation of privacy extends in the case of digital information. Specifically, when public social media data is used for medical surveillance and making psychological diagnoses the legal calculation may change. One interpretation of the “reasonable expectation of privacy” test argues that it is an objective test—asking whether a reasonable person would actually have a privacy interest. Indeed, some scholars have suggested using polling data to define the perimeter of Fourth Amendment protections. In that vein, an analysis of the American Psychiatric Association’s “Goldwater Rule” is illustrative.

The Goldwater Rule emerged after Fact magazine published psychiatrists’ medical impressions of 1964 presidential candidate Barry Goldwater. Goldwater filed a libel suit against Fact, and the jury awarded him $1.00 in compensatory damages and $75,000 in punitive damages resulting from the publication of the psychiatric evaluations. None of the quoted psychiatrists had met or examined Goldwater in person. Subsequently, concerned primarily about the inaccuracy of “diagnoses at a distance,” the APA adopted the Goldwater Rule, prohibiting psychiatrists from engaging in such practices. It is still in effect today.

The Goldwater Rule does not speak to privacy per se, but it does speak to the importance of personal, medical relationships between psychiatrists and patients when arriving at a diagnosis. Courts generally treat those types of relationships as private and protect them from needless public exposure. Further, using social media surveillance to diagnose mental illness is precisely the type of diagnosis-at-a-distance that concerns the APA. However, big-data techniques promise to obviate the diagnostic inaccuracies the 1960s APA was concerned with.

The jury verdict in favor of Goldwater is more instructive. While the jury awarded only nominal compensatory damages, it nevertheless chose to punish Fact magazine. This suggests that the jury took great umbrage at the publication of psychiatric diagnoses, even though they were derived from publicly available data. Could this be because psychiatric diagnoses are private? The Second Circuit, upholding the jury verdict, noted that running roughshod over privacy interests is indicative of malice in cases of libel. Under an objective test, this seems to suggest that subjecting public information to the medical gaze, especially the psychiatrist’s gaze, unveils information that is private. In essence, applying big-data computer science techniques to public posts reveals private information contained in the publicly available words themselves. Even though the public social media posts are not subject to a reasonable expectation of privacy, a psychiatric diagnosis based on those words may be objectively private. In sum, the medicalization and medical surveillance of normal interactions on social media may create a Fourth Amendment privacy interest where none previously existed.


“Open Up It’s the Police! . . . and Jeff Bezos?”

Noah Cozad, MJLST Staffer

Amazon’s Ring company posted a series of Instagram posts around Halloween, including a video of children trick-or-treating and statistics about how many doorbells were rung that night. What was probably conceived as a cute marketing idea quickly drew backlash: it turns out people were not enamored with the thought of Ring watching their children trick-or-treat. This is not the first time Ring’s ads have drawn criticism. In June of this year, social media users noticed that Ring was using images and footage from their cameras in advertisements. The posts included pictures of suspects, as well as details of their alleged crimes. Ring called these “Community Alerts.” Customers, it seems, have agreed to exactly this use of data. In Ring’s terms of service agreement, customers grant Ring the ability to “use, distribute, store . . . and create derivative works from such Content that you share through our Service.”

The backlash to Ring’s ads gets at a deeper concern about the Amazon company and its technology: the creation of a massive, privately owned surveillance network. Consumers have good reason to be wary. It is not fully understood what exactly Ring does with the images and videos this network creates. Earlier this year, it was reported that Ring allegedly gave its Ukrainian R&D team unlimited access to every video and image created by any Ring camera, and allegedly allowed engineers and executives unlimited access to some customers’ cameras as well, including Ring’s security cameras made for indoor use. Ring has denied these allegations. There are not many specifics, but the company is said to have only “minimum security standards” in general and appears not to encrypt the storage of customer data, though data is now encrypted “in transit.”

The legal and civil rights concerns arising from this technology all seem to come to a head with Ring’s partnerships with local police departments. More than six hundred police departments, including the Plymouth and Rochester departments, have partnered with Ring. Police departments encourage members of their community to buy Ring, and Ring gives police forces potential access to camera footage. The footage is accessed through a request to the customer, which can be denied; otherwise, police usually require a warrant to force Ring to hand over the footage. Some California departments, though, allege they have been able to sidestep the customer and simply threaten Ring with a subpoena for the footage. If true, there is effectively little stopping Ring from sharing footage with police. Ring has claimed to be working hard to protect consumers’ privacy but has not answered exactly how often it gives police footage without the approval of the customer or a warrant.

How legislatures and regulators will handle this massive surveillance network and its partnerships with law enforcement is up in the air at this point. Despite continual backlash to its services, and 30 civil rights groups speaking out against Ring’s corporate practices, there has been little movement at the federal level beyond a letter from Senator Markey (D-Mass.) to Amazon demanding more information on its services. Amazon’s recent reply to Senator Markey shed some light on how police can receive and use the data: police can request 12 hours of footage from any device within a 0.5-mile radius of the crime, and Amazon does not require police to meet any evidentiary standard before asking for footage.
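
A rough sketch of how such a request might be scoped in practice: given an incident location, identify every device within a 0.5-mile radius. The coordinates and camera IDs below are invented, and Amazon has not published its actual mechanism; this only illustrates the geometry of the disclosed policy.

```python
# Hypothetical sketch of scoping a footage request to a half-mile radius.
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula (Earth radius in miles)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(a))

incident = (44.9778, -93.2650)          # example point: downtown Minneapolis
cameras = {"cam-1": (44.9800, -93.2700), "cam-2": (45.0500, -93.3000)}

in_scope = [cam for cam, (lat, lon) in cameras.items()
            if miles_between(*incident, lat, lon) <= 0.5]
print(in_scope)   # only the camera inside the half-mile radius
```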

Despite the relative lack of governmental action currently, it is almost assured some level of government will act on these issues in the near future. For now, though, Ring continues to expand its network, and along with it, concerns over due process, privacy, and law enforcement overreach.


Forget About Quantum Computers Cracking Your Encrypted Data, Many Believe End-to-End Encryption Will Lose Out as a Matter of Policy

Ian Sannes, MJLST Staffer

As reported in Nature, Google recently announced it finally achieved quantum supremacy, the point at which a quantum computer, built on qubits rather than the bits of conventional machines, solves a problem that is practically out of reach for conventional computers. However, quantum computers are not a threat to encryption any time soon according to John Preskill, who coined the term “quantum supremacy”; such theorized uses remain many years out. Furthermore, the question remains whether quantum computers are even a threat to encryption at all. IBM recently showcased one way to encrypt data that is immune to the theoretical cracking ability of future quantum computers. It seems that while one method of encryption is theoretically prone to attack by quantum computers, the industry will simply adopt methods that are not prone to such attacks when it needs to.
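
The distinction the industry relies on can be stated simply: Shor’s algorithm threatens widely used public-key schemes like RSA and elliptic-curve cryptography, while symmetric ciphers such as AES-256 are generally believed to keep a large security margin because Grover’s algorithm offers only a quadratic speedup. A minimal sketch of symmetric encryption, assuming the third-party Python cryptography package:

```python
# Minimal sketch using the "cryptography" package. AES-256-GCM is a symmetric
# cipher; known quantum attacks (Grover's algorithm) only halve its effective
# key strength, which is why such schemes are considered far more resilient
# than RSA/ECC against a future quantum computer.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"private user data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"private user data"
```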

Does this mean that end-to-end encryption methods will always protect me?

Not necessarily. Stewart Baker opines that encryption faces many policy threats, such as homeland security policy, foreign privacy laws, and content moderation, which he believes will win out over the right to have encrypted private data.

The highly publicized efforts of the FBI in 2016 to force Apple to unlock encryption on an iPhone for national security reasons ended when the FBI dropped the case after hiring a third party who was able to crack the encryption. This may seem like a win for Silicon Valley’s historically pro-encryption stance, but foreign laws, such as the UK’s Investigatory Powers Act, are opening the door for government power in obtaining users’ digital data.

In October of 2019, Attorney General Bill Barr requested that Facebook halt its plans to implement end-to-end encryption on its messaging services because it would impede the investigation of serious crimes. Mark Zuckerberg, the CEO of Facebook, admitted it would be more difficult to identify and remove harmful content if such encryption were implemented, but Facebook has yet to roll it out.

Some believe legislators may simply force software developers to create back doors to users’ data. Kalev Leetaru believes content moderation policy concerns will allow governments to bypass encryption completely by forcing device manufacturers or software companies to install client-side content-monitoring software that is capable of flagging suspicious content and sending decrypted versions to law enforcement automatically.
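
A minimal sketch of the client-side scanning architecture Leetaru describes: content is checked on the device before encryption, so the cipher itself is never weakened, yet flagged material escapes it entirely. The phrase list and reporting channel below are hypothetical.

```python
# Illustrative sketch of "client-side scanning": the message is inspected on
# the user's device *before* end-to-end encryption, so flagged content can be
# forwarded in the clear even though the encryption itself is untouched.
FLAGGED_PHRASES = ["example banned phrase"]   # hypothetical flag list

def report_to_authorities(plaintext: str) -> None:
    print("decrypted copy forwarded:", plaintext)   # stand-in for a real channel

def send_message(plaintext: str, encrypt) -> bytes:
    if any(phrase in plaintext.lower() for phrase in FLAGGED_PHRASES):
        report_to_authorities(plaintext)             # bypasses the encryption
    return encrypt(plaintext)                        # normal encrypted path

# Toy stand-in cipher (reversed bytes), not real encryption.
ciphertext = send_message("hello world", lambda m: m.encode()[::-1])
```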

The trend seems to be headed in the direction of some governmental bypass of conventional encryption. However, just like IBM’s quantum-proof encryption was created to solve a weakness in encryption, consumers will likely find another way to encrypt their data if they feel there is a need.


Pacemakers, ICDs, and ICMs – Oh My! Implantable Heart Detection Devices

Janae Aune, MJLST Staffer

Heart attacks and heart disease kill hundreds of thousands of people in the United States every year. Heart disease affects every person differently based on their genetic and ethnic background, lifestyle, and family history. While some people are aware of their risk of heart problems, over 45 percent of sudden cardiac deaths occur outside of the hospital. With a condition as spontaneous as a heart attack, accurate information tracking and reporting is vital to effective treatment and prevention. As in any market, the market for heart monitoring devices is diverse, with new equipment arriving every year. The newest device in a long line of technology is the LINQ monitoring device, which builds on and works with devices already established in the medical community.

Pacemakers were first used effectively in 1969, when lithium batteries were invented. These devices are surgically implanted under the skin of a patient’s chest and are meant to help control the heartbeat. They can be implanted for temporary or permanent use and are usually targeted at patients who experience bradycardia, a slow heart rate. These devices require consistent check-ins with a doctor, usually every three to six months, and must be replaced every five to 15 years depending on battery life. Pacemakers revolutionized heart monitoring but carry significant surgical risks and the potential for device malfunction.

Implantable cardioverter defibrillators (ICDs) are also surgically implanted devices but differ from pacemakers in that they deliver a single shock when needed rather than continuous electrical pulses. An ICD is similar to the paddles doctors use to stimulate a heart in the hospital (think of a doctor yelling “charge” before applying the paddles). These devices are used mostly in patients with tachycardia, a heartbeat that is too fast. Implantation of an ICD requires feeding wires through the blood vessels of the heart. A subcutaneous ICD (S-ICD) has been newly developed and gives patients who have structural defects in their heart blood vessels another ICD option. Like a pacemaker, an ICD monitors activity constantly, but its data is read only at follow-up appointments with the doctor. ICDs last an average of seven years before the battery must be replaced.

The Reveal LINQ system is a newly developed heart monitoring device that records and transmits continuous information to a patient’s doctor at all times. The system requires surgical implantation of a small device known as the insertable cardiac monitor (ICM). The ICM works with another component called the patient monitor, which is a bedside monitor that transmits the continuous information collected by the ICM to a doctor instantly. A patient assistant control is also available which allows the patient to manually mark and record particular heart activities and transmit those in more detail. The LINQ system allows a doctor to track a patient’s heart activity remotely rather than requiring the patient to come in for the history to be examined. Continuous tracking and transmitting allow a patient’s doctor to more accurately examine heart activity and therefore create a more effective treatment approach.
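
One hypothetical way to model the data flow just described, with the implanted monitor recording continuously, the bedside unit relaying readings to the clinic, and the patient able to mark notable episodes. None of this reflects the manufacturer’s actual software; it only sketches the architecture.

```python
# Hypothetical model of a LINQ-style pipeline: ICM readings flow through a
# bedside monitor to a clinic log, and the patient can mark episodes manually.
from dataclasses import dataclass, field

@dataclass
class Reading:
    timestamp: float
    heart_rate: int
    patient_marked: bool = False   # set via the patient assistant control

@dataclass
class BedsideMonitor:
    clinic_log: list = field(default_factory=list)

    def transmit(self, reading: Reading) -> None:
        self.clinic_log.append(reading)   # stand-in for a network upload

monitor = BedsideMonitor()
monitor.transmit(Reading(timestamp=0.0, heart_rate=62))
monitor.transmit(Reading(timestamp=1.0, heart_rate=141, patient_marked=True))
print(len(monitor.clinic_log), "readings available to the clinician remotely")
```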

With the development of wearable technology meant to track health information and transmit it to the wearer, devices such as the LINQ system provide new opportunities for technologies to work together to promote better health practices. The Apple Watch Series 4 included electrocardiogram monitoring that records heart activity and checks the reading for atrial fibrillation (AFib). This is the same heart activity that pacemakers, ICDs, and the LINQ system are meant to monitor. The future of heart attack and disease detection and treatment could be massively impacted by the ability to monitor heart behavior in multiple different ways. Between devices that can shock your heart, continuously monitor and transmit information about it, and alert you from a watch when your heart rate may be abnormal, a future of decreased heart problems could be a reality.

With all of these newly developed methods of continuous tracking, the question arises: how is all of that information protected? Health and heart behavior, which is internal and out of your control, is as personal as information gets. Electronic monitoring and transmission of this data opens it up to cybersecurity targeting. Cybersecurity and data privacy issues with these devices have started to be addressed more fully; however, the concerns differ depending on which implantable device a patient has. Vulnerabilities have been identified in ICD devices that would allow an unauthorized individual to access and potentially manipulate the device. Scholars have argued that efforts to decrease vulnerabilities should focus on protecting the confidentiality, integrity, and availability of information transmitted by implantable devices. The FDA has indicated that the use of a home monitoring system could decrease the potential vulnerabilities. As the benefits from heart monitors and heart data continue to grow, we need to be sure that our privacy protections grow with them.
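
As a concrete illustration of one of those three goals, here is a minimal integrity check for device telemetry using only Python’s standard library. A real implant protocol would pair it with encryption for confidentiality and redundancy for availability, and the shared key here is purely illustrative.

```python
# Minimal sketch of an integrity check for device telemetry: a keyed MAC lets
# the receiver detect tampering in transit. The key is a placeholder.
import hashlib
import hmac

DEVICE_KEY = b"shared-secret-provisioned-at-implant"   # hypothetical

def sign(telemetry: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, telemetry, hashlib.sha256).digest()

def verify(telemetry: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(telemetry), tag)

packet = b"hr=72;rhythm=normal"
tag = sign(packet)
assert verify(packet, tag)                    # unmodified packet accepted
assert not verify(b"hr=30;rhythm=vfib", tag)  # tampered packet rejected
```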


Wearable, Shareable, Terrible? Wearable Technology and Data Protection

Alex Wolf, MJLST Staffer

You might consider the Sony Walkman, which celebrates its 40th anniversary this year, the first wearable technology of the modern day. After the invention of Bluetooth 1.0 in 2002, commercial competitors began to realize the vast promise that this emergent technology afforded. Fifteen years later, over 265 million wearable tech devices are sold annually. It looks to be a safe bet that this trend will continue.

A popular subset of wearable technology is the fitness tracker. The user attaches the device to themselves, usually on their wrist, and it records their movements. Lower-end trackers record basics like steps taken, distance walked or run, and calories burned, while the more sophisticated ones can track heart rate and sleep statistics (sometimes also featuring fun extras like Alexa support and entertainment app playback). And although this data could not replace the care and advice of a healthcare professional, there have been positive health results. Some people have learned of serious health problems only once they started wearing a fitness tracker. Other studies have found a correlation between wearing a FitBit and increased physical activity.
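
The arithmetic behind a low-end tracker’s display is straightforward, as the sketch below suggests. The stride-length and calorie constants are common rules of thumb that vary by individual, not any vendor’s actual formula.

```python
# Rough sketch of how a basic tracker might derive its displayed stats from a
# raw step count. The constants are generic rules of thumb, not vendor values.
def daily_summary(steps: int, stride_m: float = 0.75, kcal_per_step: float = 0.04):
    distance_km = steps * stride_m / 1000
    calories = steps * kcal_per_step
    return {"steps": steps,
            "distance_km": round(distance_km, 2),
            "calories": round(calories)}

print(daily_summary(10_000))  # {'steps': 10000, 'distance_km': 7.5, 'calories': 400}
```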

Wearable tech is not all good news, however; legal commentators and policymakers are worried about privacy compromises that result from personal data leaving the owner’s control. The Health Insurance Portability and Accountability Act (HIPAA) was passed by Congress with the aim of providing legal protections for individuals’ health records and data when they are disclosed to third parties. But, generally speaking, wearable tech companies are not bound by HIPAA’s reach. The companies claim that no one else sees the data recorded on your device (with a few exceptions, like the user’s express written consent). But is this true?

A look at the modern American workplace can provide an answer. Employers are attempting to find new ways to manage health insurance costs as survey data shows that employees are frequently concerned with the healthcare plan that comes with their job. Some have responded by purchasing FitBits and other like devices for their employees’ use. Jawbone, a fitness device company on its way out, formed an “Up for Groups” plan specifically marketed towards employers who were seeking cheaper insurance rates for their employee coverage plans. The plan allows executives to access aggregate health data from wearable devices to help make cost-benefit determinations for which plan is the best choice.

Hearing the complaints of commentators and state elected representatives, members of Congress have responded: Senators Amy Klobuchar and Lisa Murkowski introduced the “Protecting Personal Health Data Act” in June 2019. It would create a National Task Force on Health Data Protection, which would advise the Secretary of Health and Human Services (HHS) on creating practical minimum standards for biometric and health data. The bill is a recognition that HIPAA has serious shortcomings for digital health data privacy. As a 2018 HHS Committee Report noted, “A class of health records that can be subject to HIPAA or not subject to HIPAA is personal health records (PHRs) . . . PHRs not subject to HIPAA . . . [have] no other privacy rules.” Dena Mendelsohn, a lawyer for Consumer Reports, remarked favorably that the bill is needed because the current framework is “out of date and incomplete.”

The Supreme Court has recognized privacy rights in cell-site location data, and a federal court recognized standing to sue for a group of plaintiffs whose personally identifiable information (PII) was hacked and uploaded onto the Dark Web. Many in the legal community are pushing for the High Court to offer clearer guidance to both tech consumers and corporations on the state of protection of health and other personal data, including private rights of action. Once there is a resolution on these procedural hurdles, we may see firmer judicial directives on an issue that compromises the protected interests of more and more people.


Practical Results of Enforcing the GDPR

Sooji Lee, MJLST Staffer

After the enforcement of the European Union’s (“EU”) General Data Protection Regulation (“GDPR”), Facebook was sued by one of its shareholders, Fern Helms, because its share price fell more than 20 percent on July 27, 2018. The fall occurred because investors were afraid of the GDPR’s potential negative impact on the company. The case surprised many people around the world and showed how the GDPR is a sensational regulation that can spawn lawsuits involving tremendous amounts of money. This post will describe what has occurred since the enforcement of this sweeping regulation with worldwide impact.

Under the GDPR, regulated entities (data controllers and data processors) must obtain prior “consent” from their users before collecting customers’ personal data. Each member country must establish a Data Protection Authority (“DPA”) to implement the GDPR. The regulation has broad reach, applying to EU corporations and to non-EU corporations that deal with EU citizens’ personal data. Therefore, after the announcement of the regulation, many United States-based global technology corporations that conduct some of their business in European countries, such as Google and Facebook, commenced processes to comply with the GDPR. For example, Facebook launched its own website explaining its efforts to comply.
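
The GDPR specifies what consent must be (freely given, specific, informed, and revocable), not how to store it, but one plausible shape for a controller’s consent log might look like the sketch below. The field names are assumptions, not anything prescribed by the regulation.

```python
# Hypothetical consent record a data controller might keep for GDPR purposes:
# one record per user per specific processing purpose, with withdrawal tracked.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # one specific processing purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

record = ConsentRecord("user-123", "email marketing",
                       granted_at=datetime.now(timezone.utc))
record.withdrawn_at = datetime.now(timezone.utc)  # withdrawal should be as easy as granting
print(record.is_active())   # False
```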

Surprisingly, however, despite large-scale preparation, Google and Facebook were sued for breach of the GDPR. According to a report authored by the IAPP, thousands of claims were filed within one month of the GDPR’s enforcement date, May 25, 2018. This suggests that compliance is difficult for today’s internet-based service companies. Additionally, some companies without the resources to prepare for the GDPR, such as the Chicago Tribune and the LA Times, temporarily blocked EU users from their websites, and some decided to terminate their services in the EU.

One interesting fact is that no one has been fined under the GDPR yet. A spokesperson for the United Kingdom’s Information Commissioner’s Office commented, “we are dealing with the first GDPR cases but it’s too early to speculate about fines or processing bans at this stage.” Experts expect that calculating fines and processing bans could take another six months, and they foresee that once a decision is rendered, it could set a standard for future cases that may be difficult to change.

The GDPR, a new regulation with worldwide impact, has just started its journey toward proper consumer data protection. Many of the issues it raises are yet to be settled, and for now no expert can make an accurate prediction. Some side effects seem inevitable. So, it is time to assess the results of the regulation and keep making careful amendments, such as expanding or restricting the scope of covered entities, to adjust for problems as they arise.


Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May of 2018, sent many U.S. companies scrambling to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. Google’s fine makes it the first U.S. tech giant to face GDPR enforcement. While a 50 million euro fine (roughly 57 million dollars) may sound hefty, it is relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for getting consumers to consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) the inability of users to consent to individual data processes, which instead required whole-cloth acceptance of both Google’s Privacy Policy and Terms of Service.
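
Put in code, a consent flow that avoids all three rejected practices is simple: every purpose defaults to unchecked, consent is granted per purpose rather than whole cloth, and opting out takes one step. The purpose names below are invented for illustration.

```python
# Sketch of consent handling that avoids the three practices the French DPA
# rejected: no pre-checked boxes, per-purpose consent, and one-step opt-out.
PURPOSES = ["ads_personalization", "location_history", "voice_recordings"]

# Every purpose defaults to False -- nothing is pre-checked.
consents = {purpose: False for purpose in PURPOSES}

def grant(purpose: str) -> None:
    consents[purpose] = True    # user opts in to one purpose at a time

def revoke(purpose: str) -> None:
    consents[purpose] = False   # a single step to opt back out

grant("location_history")
print(consents)   # only the purpose the user actually chose is enabled
```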

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.


Access Denied: Fifth Amendment Invoked to Prevent Law Enforcement From Accessing Phone

Hunter Moss, MJLST Staffer 

Mobile phones are an inescapable part of modern life. Research shows that 95% of Americans carry some sort of cell phone, while 77% own smartphones. These devices contain all sorts of personal information, including: call logs, emails, pictures, text messages, and access to social networks. It is unsurprising that the rise of mobile phone use has coincided with an increased interest from law enforcement. Gaining access to a phone could provide a monumental breakthrough in a criminal investigation.

Just as law enforcement is eager to rummage through a suspect’s phone, many individuals hope to keep personal data secret from prying eyes. Smartphone developers use a process called encryption to ensure their consumers’ data is kept private. In short, encryption is a process of encoding data so that it is inaccessible without an encryption key. Manufacturers have come under increasing pressure to release encryption keys to law enforcement conducting criminal investigations. Most notable was the confrontation between the F.B.I. and Apple in the wake of the San Bernardino shooting. A magistrate judge ordered Apple to decrypt the shooter’s phone. The tech giant refused, stating that granting the government such a power would undermine the security, and the privacy, of all cellphone users.

The legal theory of a right to privacy has served as the foundation of defenses against government requests for cellphone data. These defenses have been couched in the Fourth Amendment, which is the constitutional protection guaranteeing security against unreasonable searches. In a ruling that will have profound implications for the future of law enforcement, the Supreme Court first extended Fourth Amendment protection to mobile phone location data in June 2018, when it decided Carpenter v. United States. The holding in Carpenter requires the government to obtain a warrant before acquiring historical cell-site location records from service providers.

A case from Florida was the most recent iteration of a novel legal theory to shield smartphone users from government encroachment. While the Carpenter decision relied on the Fourth Amendment’s right to privacy, last week’s ruling by the Florida Court of Appeals invokes the Fifth Amendment to bar law enforcement agents from compelling suspects to enter their passcodes and unlock their phones. On this reasoning, the court quashed a juvenile court’s order requiring the defendant to reveal his password and, with it, the privacy of his phone.

The Fifth Amendment is the constitutional protection from self-incrimination. A suspect in a criminal case cannot be compelled to communicate inculpatory evidence. Because a phone’s passcode is something that we, as the owners, “know,” being forced to divulge it would be akin to being forced to testify against oneself. While mobile phone users might feel relieved that Fifth Amendment doctrine is expanding privacy protections, smartphone owners shouldn’t be too quick to celebrate. While the Fifth Amendment might protect what you “know,” it does not protect what you “are.” Several courts have recognized that the police may unlock a phone using a suspect’s fingerprint or facial recognition software. Given that fingerprinting and mug shots are already routine procedures during an arrest, courts have been reluctant to view unlocking a phone in either manner as an additional burden on suspects.

Technology has seen incredible advancements over the last few years, particularly in the field of mobile devices. Some have even theorized that our phones are becoming extensions of our minds. The legal framework of constitutional protections supporting the right to privacy and the right against self-incrimination has trailed the pace of these developments. The new string of cases extending the Fifth Amendment to cellphone searches is an important step in the right direction. As phones have become a ubiquitous part of modern life, containing much of our most private and intimate information, the law must continue to evolve to ensure that they are safeguarded from unwanted and unlimited government intrusion.


Carpenter Might Unite a Divided Court

Ellen Levis, MJLST Staffer

In late 2010, there was a robbery at a Radio Shack in Detroit. A few days later: a stick-up at a T-Mobile store. A few more months, a few more robberies – until law enforcement noticed a pattern and eventually, in April 2011, the FBI arrested four men under suspicion of violating the Hobbs Act (that is, committing robberies that affect interstate commerce).

One of the men confessed to the crimes and gave the FBI his cell phone number and the numbers of the other participants. The FBI used this information to obtain “transactional records” for each of the phone numbers, which magistrate judges granted under the Stored Communications Act. Based on this “cell-site evidence,” the government charged Timothy Carpenter with a slew of offenses. At trial, Carpenter moved to suppress the government’s cell-site evidence, which covered 127 days and placed his phone at 12,898 locations. The district court denied the motion to suppress; Carpenter was convicted and sentenced to 116 years in prison. The Sixth Circuit affirmed the district court’s decision when Carpenter appealed.

In November 2017, the Supreme Court heard what might be the most important privacy case of this generation. Carpenter v. United States asks the Supreme Court to consider whether the government, without a warrant, can track a person’s movement via geo-locational data beamed out by cell phone.   

Whatever they ultimately decide, the Justices seemed to present a uniquely united front in their questioning at oral arguments, with both Sonia Sotomayor and Neil Gorsuch hinting that warrantless cell-site evidence searches are incompatible with the protections promised by the Fourth Amendment.  

In United States v. Jones, 132 S. Ct. 945 (2012), Sotomayor wrote a prescient concurring analysis of the challenge facing the Court as it attempts to translate the Fourth Amendment into the digital age. Sotomayor expressed doubt that “people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, or month, or year.” And further, she “would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection.”

In the Carpenter oral argument, Sotomayor elaborated on the claims she made in Jones, giving concrete examples of how extensively Americans use their cellphones and how invasive cell phone tracking could become: “I know that most young people have the phones in the bed with them. . . . I know people who take phones into public restrooms. They take them with them everywhere. It’s an appendage now for some people . . . Why is it okay to use the signals that phone is using from that person’s bedroom, made accessible to law enforcement without probable cause?”

Gorsuch, on the other hand, drilled down on a property-rights theory of the Fourth Amendment, questioning whether a person has a property interest in the data they create. He stated, “it seems like [the] whole argument boils down to — if we get it from a third party we’re okay, regardless of property interest, regardless of anything else.” And he continued, “John Adams said one of the reasons for the war was the use by the government of third parties to obtain information forced them to help as their snitches and snoops. Why isn’t this argument exactly what the framers were concerned about?”


New Data Protection Regulation in European Union Could Have Global Ramifications

Kevin Cunningham, MJLST Staffer

For as long as the commercial web has existed, companies have monetized personal information by mining data. On May 25, however, individuals in the 28 member countries of the European Union will gain the ability to opt into the data collection so many data companies rely on. The General Data Protection Regulation (GDPR), agreed upon by the European Parliament and Council in April 2016, will replace Data Protection Directive 95/46/EC as the primary law regulating how companies protect the personal data of individuals in the European Union. The requirements of the new GDPR aim to create more consistent protection of consumer and personal data across the European Union.

Publishers, banks, universities, data and technology companies, ad-tech companies, devices, and applications operating in the European Union will have to comply with the privacy and data protection requirements of the GDPR or be subject to heavy fines (up to four (4) percent of annual global revenue) and penalties. The requirements include: obtaining subjects’ consent for data processing; anonymizing collected data to protect privacy; providing data breach notifications within 72 hours of the occurrence; safely handling the transfer of data across borders; and requiring certain companies to appoint a data protection officer to oversee compliance with the Regulation. Likewise, the European Commission posted on its website that a social network platform will have to adhere to user requests to delete photos and inform search engines and other websites that used the photos that the images should be removed. This baseline set of standards for companies handling data in the EU will better protect the processing and movement of personal data.
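
Two of the listed requirements lend themselves to a brief sketch: pseudonymizing collected identifiers with a keyed hash, and computing the 72-hour breach notification deadline. The key and sample values are illustrative assumptions, not prescribed by the regulation.

```python
# Illustrative sketch of two GDPR requirements listed above. The secret key is
# a placeholder; real deployments store and rotate it separately from the data.
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

PSEUDONYM_KEY = b"stored-separately-and-rotated"   # hypothetical secret

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier with a stable token, not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymize("alice@example.com"))

# Breach notifications are due within 72 hours of the controller becoming aware.
breach_detected = datetime.now(timezone.utc)
notify_by = breach_detected + timedelta(hours=72)
print("notify supervisory authority by:", notify_by.isoformat())
```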

Companies will have to be clear and concise about the collection and use of personally identifiable information such as name, home address, location data, or IP address. Consumers will have the right to access the data companies store about them, as well as the right to correct false or inaccurate information. Moreover, the GDPR imposes stricter conditions on the collection of ‘sensitive data’ such as race, political affiliation, sexual orientation, and religion. The GDPR will still allow businesses to process personally identifiable information without consumer consent for legitimate business interests, which include direct marketing through mail, email, or online ads. Still, companies will have to account for the data subject’s interests and rights when relying on that basis.

The change to European law could have global ramifications. Any company that markets goods or services to EU residents will be subject to the GDPR. Many of the giant tech companies that collect data, such as Google and Facebook, seek to keep their systems uniform worldwide and have either revamped their privacy settings or announced changes to make them more user-friendly.