Data

I’ve Been Shot! Give Me a Donut!: Linking Vaccine Verification Apps to Existing State Immunization Registries

Ian Colby, MJLST Staffer

The gold rush for vaccination appointments is in full swing. After Governor Walz and many other governors announced an acceleration of vaccine eligibility in their states, the newly eligible desperately sought vaccinations to help the world achieve herd immunity to the SARS-CoV-2 virus (“COVID”) and get back to normal life.

The organization administering a person’s initial dose typically gives the recipient an approximately 4” x 3” card that lists the vaccine manufacturer, the date and location of inoculation, and the Centers for Disease Control and Prevention (“CDC”) logo. The CDC website does not specify what, exactly, this card is for. Likely purposes include informing the patient about the healthcare they just received, reminding them to get a second dose, or recording batch numbers in case a manufacturing issue arises. Maybe they did it for the ‘Gram. However, regardless of the CDC’s reason for the card, many news outlets have latched onto its most likely future use: as a passport to get the post-pandemic party started.

Airlines, sports venues, schools, and donut shops are desperate to return to safe mass gatherings and close contact without needing to enforce as many protective measures. In the short term, these organizations will likely seek assurance of a person’s vaccination status. Aside from the equitable and scientific issues with requiring this assurance, these businesses will likely accept the CDC vaccination cards as “proof.” The cardboard-and-ink security of these cards rivals social security cards in the “high importance – zero protection” category. Warnings of scammers selling blank CDC cards or stealing vaccinated people’s names and birthdates hit the web last week (no scammers needed: you can download Missouri’s PDF and print one for free).

With so little security, but with a business need to reopen the economy to vaccinated folks, businesses and governments have turned to digital vaccine passports. Generically named “digital health passes,” these apps will allow a person to show proof of their vaccination status securely. They “provide a path to reviving the economy and getting Americans back to work and play,” according to a New York Times article. “For any such certificate or passport to work, it is going to need two things – access to a country’s official records of vaccinations and a secure method of identifying an individual and linking them to their health record.”

A variety of actors, both governments and private firms, have undertaken development of these digital health passes. Israel already provides a nationwide digital proof of vaccination known as a Green Pass, and Denmark followed suit with the Coronapas. In addition, a number of private companies and nonprofits are vying to become the preeminent vaccine status app for the world’s smartphones. While governments such as Israel’s have preexisting authority to access immunization and identification records, private firms do not; they would require authorization to access your medical records.

So, in the United States, who would run these apps? Not the U.S. federal government. The Biden Administration unequivocally denied that it would ever require vaccine status checks or keep a vaccination database. The federal government does not need to, though. Most states already manage a digital vaccination database pursuant to laws authorizing them; the remaining states maintain digital databases even without direct statutory authorization. These immunization information systems (“IIS”) provide quick access to a person’s vaccination status. A state’s resident can request their vaccination status for myriad vaccinations for free and receive the results via email. Texas and Florida, which made big hubbubs about restricting any use of vaccine passports, both have registries that provide proof of vaccination. So does New York, which has already published an app, known as the Excelsior Pass, that does this for the COVID vaccine. The State’s app pulls information from New York’s immunization registry, providing a quick, simple yes-no result for those requiring proof. The app uses IBM’s blockchain technology, which is “designed to enable the secure verification of health credentials such as test results and vaccination records without the need to share underlying medical and personal information.”
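To make the registry-backed, yes-or-no idea concrete, here is a minimal, hypothetical sketch of how such a pass could work: the state registry signs a bare-bones claim (vaccinated or not), and a venue checks the signature without ever seeing the underlying medical record. This is only an illustration, not the Excelsior Pass’s actual design, which relies on IBM’s blockchain-anchored credentials and public-key signatures; every name, function, and key below is invented.

```python
import hashlib
import hmac
import json

# Hypothetical registry signing key. A real credential system would use
# asymmetric (public-key) signatures so verifiers never hold any secret.
REGISTRY_KEY = b"state-immunization-registry-demo-key"

def issue_pass(name: str, birthdate: str, vaccinated: bool) -> dict:
    """Registry side: sign a minimal yes/no claim, not the full medical record."""
    claim = {"name": name, "birthdate": birthdate, "vaccinated": vaccinated}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_pass(credential: dict) -> bool:
    """Venue side: confirm the claim is genuine and learn only yes or no."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and credential["claim"]["vaccinated"])

# Example: a donut shop scans a pass and sees only a yes/no answer.
print(verify_pass(issue_pass("Jane Doe", "1990-01-01", vaccinated=True)))  # True
```

Even in this toy form, the appeal of linking apps to existing registries is visible: the verifier never touches the immunization record itself, only a signed answer derived from it.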

With so many options, consumers of vaccine status apps could become overwhelmed. A vaccinated person may need to download innumerable apps to enter myriad activities. “Fake” apps could ask for additional medical information from the unwary. Private app developers may try to justify continued use of the app after the need for COVID vaccination proof passes.

In this competitive atmosphere, apps that partner with state governments likely provide the best form of digital vaccination verification. These apps have direct approval from the states that are required by law to maintain these vaccination records. They provide some authority to avoid scams. And cooperation to achieve state standardization of these apps may facilitate greater use. States seeking to reopen their economies should authorize digital interfaces with their pre-existing immunization registries. Now that the gold rush for vaccinations has started, the gold rush for vaccine passports is something to keep an eye on.


Ways to Lose Our Virtual Platforms: From TikTok to Parler

Mengmeng Du, MJLST Staffer

Many Americans bid farewell to the somewhat rough 2020 but found the beginning of 2021 rather shocking. After President Trump’s followers stormed the Capitol Building on January 6, 2021, major U.S. social media companies, including Twitter, Facebook, Instagram, and Snapchat, moved fast to block the nation’s president on their platforms. While everybody was still in shock, a second wave hit. Apple’s iOS App Store, Google’s Android Play Store, Amazon Web Services, and other service providers decided to remove Parler, an app used by Trump supporters in the riot and favored mostly by conservatives. Finding himself virtually homeless, President Trump relocated to TikTok, a Chinese-owned short-video sharing app he had relentlessly sought to ban ever since July 2020. Ironically but not unexpectedly, TikTok banned President Trump before he could even ban TikTok.

Dating back to June 2020, the fight between TikTok and President Trump germinated when the app’s Chinese parent company, ByteDance, was accused of discreetly accessing the clipboard content on users’ iOS devices. Although the company argued that the accused technical feature was an “anti-spam” measure and would be removed immediately, the Trump administration signed Executive Order 13942 on August 6, 2020, citing national security concerns to ban the app in five stages. TikTok responded swiftly with a legal challenge, and the District Court for the District of Columbia issued a preliminary injunction on September 27, 2020. At the same time, knowing that the root of the problem lay in its “Chinese nationality,” ByteDance desperately sought acquisition by U.S. corporations to make TikTok U.S.-owned and dodge the ruthless banishment, even at the cost of billions of dollars and, worse, its future in the U.S. market. The sale soon drew qualified bidders including Microsoft, Oracle, and Walmart, but has not advanced far since September due to pressure from both Washington and Beijing.

Targeted alongside TikTok, in a companion executive order signed the same day, was another Chinese app called WeChat. If banning TikTok means that American teens will lose their favorite virtual platform for life-sharing amid the pandemic, blocking WeChat means much more. It heavily burdens one particular minority group: the hundreds of thousands of Chinese Americans and Chinese citizens in America who use WeChat. These users fear losing connection with their families and becoming disengaged from the social networks they have built once this vital social platform disappears. For more insight, this blog post discusses the impact of the WeChat ban on Chinese students studying in the United States.

In response to the WeChat ban, several Chinese American lawyers led the creation of the U.S. WeChat Users Alliance. Supported by thousands of U.S. WeChat users, the Alliance is a non-profit organization independent of Tencent, the owner of WeChat, and was formed on August 8, 2020 to advocate for all who are affected by the ban. Subsequently, the Alliance sued the Trump administration in the United States District Court for the Northern District of California and won its first victory on September 20, 2020, when Judge Laurel Beeler issued a preliminary injunction against Trump’s executive order.

Law is powerful. Article Two of the United States Constitution vests broad executive power in the President, who has wide discretion to determine how to enforce the law, including through executive orders. That power allowed President Trump to seize on a cause that suited him and ban TikTok and WeChat for their Chinese “nationality.” Likewise, the First Amendment and Section 230 of the Communications Decency Act empower private Internet forum providers to screen and block offensive material. Thus TikTok, following its peers, found legal justification to ban President Trump, and Apple can keep Parler out of reach of Trump supporters. But power can corrupt. It is true that TikTok and WeChat are owned by Chinese companies, but an app, a technology, does not take on nationality from its ownership. What happened on January 6, 2021 in the Capitol Building was shameful, but it does not justify the removal of Parler. Admittedly, regulation and even censorship of private virtual platforms are necessary for national security and other public-interest purposes. But the solution should not be simply making platforms unavailable.

As a Chinese student studying in the United States, I personally felt the impact of the WeChat ban. I feel fortunate that the judicial check the U.S. legal system puts on executive power saved WeChat this time, but I do fear for the future of internet forum regulation.


Becoming “[COVID]aware” of the Debate Around Contact Tracing Apps

Ellie Soskin, MJLST Staffer

As COVID-19 cases continue to surge, states have ramped up containment efforts in the form of mask mandates, business closures, and other public health interventions. Contact tracing is a vital part of those efforts: health officials identify those who have been in close contact with individuals diagnosed with COVID-19 and alert them of their potential exposure to the virus, while withholding identifying information. But traditional contact tracing for a true global pandemic requires a lot of resources. Accordingly, a number of regions have looked to smartphone-based exposure notification technology as an innovative way to both supplement and automate containment efforts.

Minnesota is one of the latest states to adopt this approach: on November 23rd, the state released “COVIDaware,” a phone application designed to notify individuals if they’ve been exposed to someone diagnosed with COVID-19. Minnesota’s application utilizes a notification technology developed jointly by Apple and Google, joining sixteen other states and the District of Columbia, with more expected to roll out in the coming weeks. The nature of the technology raises a number of complex concerns over data protection and privacy. Additionally, these apps are more effective the more people use them, and lingering questions remain as to compliance and the feasibility of mandating use.

The joint Apple/Google notification software used in Minnesota is designed with an emphasis on privacy. The software uses anonymous identifying numbers (“keys”) that change rapidly, does not solicit identifying information, does not provide access to GPS data, and stores data only locally on each user’s phone rather than on a server. The keys are exchanged via a localized Bluetooth connection operating in the background. The app can also be turned off, and it relies wholly on self-reports. For Minnesota, accurate reports come in the form of state-issued verification codes provided with positive test results. The COVIDaware app checks daily whether any keys it encountered within the last 14 days have since been associated with positive test results. Minnesota policymakers, likely aware of the intense privacy concerns triggered by contact tracing apps, have emphasized the minimal data collection required by COVIDaware.
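For readers who want a more concrete picture, the following is a toy sketch of the rolling-key exchange and daily matching described above. It is not the actual Apple/Google API, which derives keys cryptographically, rotates broadcast identifiers every 10 to 20 minutes, and distributes positive keys through signed server downloads; the class and method names here are invented for illustration.

```python
import secrets
from datetime import date, timedelta

RETENTION_DAYS = 14  # data older than this is discarded, as in COVIDaware

class ExposurePhone:
    """Toy model of one phone participating in exposure notification."""

    def __init__(self):
        self.my_keys = {}      # day -> anonymous key this phone broadcast
        self.heard_keys = {}   # day -> keys heard from nearby phones via Bluetooth

    def start_day(self, day: date) -> None:
        # Generate a fresh anonymous key; nothing in it identifies the user.
        self.my_keys[day] = secrets.token_hex(16)
        self.heard_keys.setdefault(day, set())
        # Keep only the last 14 days of data, stored locally on the phone.
        cutoff = day - timedelta(days=RETENTION_DAYS)
        self.my_keys = {d: k for d, k in self.my_keys.items() if d > cutoff}
        self.heard_keys = {d: s for d, s in self.heard_keys.items() if d > cutoff}

    def record_contact(self, day: date, other_key: str) -> None:
        # Called when another phone's key is heard over Bluetooth.
        self.heard_keys.setdefault(day, set()).add(other_key)

    def daily_exposure_check(self, reported_positive_keys: set) -> bool:
        # Did any key heard in the last 14 days later appear on the list of
        # keys voluntarily reported along with a verified positive test?
        return any(key in reported_positive_keys
                   for keys in self.heard_keys.values() for key in keys)
```

Even in this simplified form, the privacy-preserving choices are visible: the phone never shares whom it met, only whether one of the anonymous keys it heard later shows up on the voluntarily reported positive list.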

The data privacy regulatory scheme in the United States is incredibly complex, as there is no single unified federal data protection policy. Instead, the sphere is dominated by individual states. Federal law enters the picture primarily via the Health Insurance Portability and Accountability Act (“HIPAA”), which does not apply to patients voluntarily giving health information to third parties. In response to concerns over contact tracing app data, multiple data privacy bills were introduced in Congress, but even the bipartisan “Exposure Notification Privacy Act” remains unpassed.

Given the decentralized nature of the internet, applications tend to be designed to comply with all 50 states’ policies. However, in this case, state-created contact tracing applications are designed for local use, so from a practical perspective states may only have to worry about compliance with neighboring states’ data privacy acts. The Minnesota Government Data Practices Act, passed in 1974, is the only substantive Minnesota state statute affecting data collection, and neighboring states’ (Wisconsin, Iowa, North Dakota, and South Dakota) laws have similarly limited or dated schemes. In this specific case, the privacy-focused Apple/Google API that forms the backbone of COVIDaware, together with the design of the app itself, described briefly above, likely keeps it compliant. In fact, some states have expressed frustration at the degree of individual privacy afforded by the Apple/Google API, saying it can stymie coordinated public health efforts.

Of course, one solution to even minimal data privacy concerns is simply not to use the application. But the efficacy of contact tracing apps depends entirely on whether people actually download and use them. Some countries have opted for degrees of mandatory use: China has mandated adoption of its contact tracing app for every citizen, utilizing unprecedented government surveillance to flag individuals potentially exposed, and India has made employers responsible for ensuring that every employee downloads its government-developed contact tracing app. While a similar employer-based approach is not legally impossible in the United States, any such mandate would be legally complex, and anyone following the controversy over mask mandates should instinctively recognize that a mandated government tracking app is a hard sell (to put it lightly).

But mandates may not even be necessary. Experts have emphasized that universal compliance isn’t necessary for an app to be effective: every user helps. Germany and Ireland have not mandated use, but have download rates of 20% and 37% respectively. Some have proposed small, community-focused launches of tracking apps, similar to successful start-ups. With proper marketing and transparency, states need not even enter the sticky legal mess that is mandating compliance.

Virtually every policy response to COVID in the United States has been met with heated controversy and tracking apps are no different. As these apps are in their infancy, legal challenges have yet to emerge, but the area in general is something of a minefield. The limited and voluntary nature of Minnesota’s COVIDaware app likely places it out of the realm of significant legal challenges and significant data privacy concerns, at least for the moment. The general conversation around contact tracing apps is a much larger one, however, and has helped put data privacy and end user control into the global conversation.


Hacking the Circuit Split: Case Asks Supreme Court to Clarify the CFAA

Kate Averwater, MJLST Staffer

How far would you go to make sure your friend’s love interest isn’t an undercover cop? Would you run an easy search on your work computer? Unfortunately for Nathan Van Buren, his friend was part of an FBI sting operation and his conduct earned him a felony conviction under the Computer Fraud and Abuse Act (CFAA), 18 USC § 1030.

Van Buren, formerly a police sergeant in Georgia, knew Andrew Albo from Albo’s previous brushes with law enforcement. Albo, who had turned informant for the FBI, asked Van Buren to run the license plate number of a dancer, claiming he was interested in her and wanted to make sure she wasn’t an undercover cop. Trying to better his financial situation, Van Buren told Albo he needed money, and Albo gave him a fake license plate number and $6,000. Van Buren then ran the fake number in the Georgia Crime Information Center (GCIC) database. Albo recorded their interactions, and the trial court convicted Van Buren of honest-services wire fraud (18 USC §§ 1343, 1346) and felony computer fraud under the CFAA.

Van Buren appealed and the Eleventh Circuit vacated and remanded the honest-services wire fraud conviction but upheld the felony computer fraud conviction. His case is currently on petition for review before the Supreme Court.

The relevant portion of the CFAA criminalizes obtaining “information from any protected computer” by “intentionally access[ing] a computer without authorization or exceed[ing] authorized access.” Van Buren’s defense was that he had authorized access to the information; he admitted, however, that he used that access for an improper purpose. This disagreement over access restrictions versus use restrictions is the crux of the circuit split. Van Buren’s petition emphasizes the need for the Supreme Court to resolve these discrepancies.

Most favorable to Van Buren is the Ninth Circuit’s reading of the CFAA. The court previously held that the CFAA did not criminalize abusing authorized access for impermissible purposes. Recently, the Ninth Circuit reaffirmed this interpretation. The Second and Fourth Circuits align with the Ninth in interpreting the CFAA narrowly, declining to criminalize conduct similar to Van Buren’s.

In affirming his conviction, the Eleventh Circuit rested on their previous decision in Rodriguez, a much broader reading of the CFAA. The First, Fifth, and Seventh Circuits join the Eleventh in interpreting the CFAA to include inappropriate use.

Van Buren’s case has sparked a bit of controversy and prompted multiple organizations to file amicus briefs. They are pushing the Supreme Court to interpret the CFAA in a narrow way that does not criminalize common activities. Broad readings of the CFAA lead to criticism of the law as “a tool ripe for abuse.”

Whether or not the Supreme Court agrees to hear the case, next time someone offers you $6,000 to do a quick search on your work computer, say no.


Forget About Quantum Computers Cracking Your Encrypted Data, Many Believe End-to-End Encryption Will Lose Out as a Matter of Policy

Ian Sannes, MJLST Staffer

As reported in Nature, Google recently announced that it had finally achieved quantum supremacy: the point at which a computer built on qubits, rather than the bits of conventional computers, can solve a problem that conventional computers cannot solve in any practical amount of time. However, quantum computers are not a threat to encryption any time soon, according to John Preskill, who coined the term “quantum supremacy”; such theorized uses remain many years out. Furthermore, the question remains whether quantum computers are even a threat to encryption at all. IBM recently showcased one way to encrypt data that is immune to the theoretical cracking ability of future quantum computers. It seems that while one method of encryption is theoretically prone to attack by quantum computers, the industry will simply adopt methods that are not prone to such attacks when it needs to.

Does this mean that end-to-end encryption methods will always protect me?

Not necessarily. Stewart Baker opines that there are many threats to encryption, such as homeland security policy, foreign privacy laws, and content moderation, which he believes will win out over the right to keep private data encrypted.

The highly publicized efforts of the FBI in 2016 to force Apple to unlock encryption on an iPhone for national security reasons ended when the FBI dropped the case after hiring a third party who was able to crack the encryption. This may seem like a win for Silicon Valley’s historically pro-encryption stance, but foreign laws, such as the UK’s Investigatory Powers Act, are opening the door for governments to obtain users’ digital data.

In October 2019, Attorney General Bill Barr requested that Facebook halt its plans to implement end-to-end encryption on its messaging services because it would hinder the investigation of serious crimes. Mark Zuckerberg, the CEO of Facebook, admitted it would be more difficult to identify and remove harmful content if such encryption were implemented, but Facebook has yet to implement it.

Some believe legislators may simply force software developers to create back doors to users’ data. Kalev Leetaru believes content moderation policy concerns will allow governments to bypass encryption completely by forcing device manufacturers or software companies to install client-side content-monitoring software that is capable of flagging suspicious content and sending decrypted versions to law enforcement automatically.
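Leetaru’s client-side monitoring idea can be sketched in a few lines. The hash list, function names, and reporting hook below are hypothetical; real proposals typically contemplate perceptual hashing (PhotoDNA-style matching) rather than the exact hashes used here, but the structural point is the same: the scan runs on the device before end-to-end encryption is applied.

```python
import hashlib

# Hypothetical list of hashes of known prohibited content (placeholder values).
FLAGGED_HASHES = {
    "placeholder-hash-of-known-prohibited-content",
}

def client_side_scan(plaintext: bytes) -> bool:
    """Runs on the sender's device before the message is encrypted."""
    return hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES

def send_message(plaintext: bytes, encrypt, report):
    if client_side_scan(plaintext):
        # Under the proposals described above, a match could trigger an
        # automatic report of the decrypted content to law enforcement,
        # bypassing the end-to-end encryption guarantee.
        report(plaintext)
    return encrypt(plaintext)
```

The design choice is the crux of the policy debate: the encryption itself is never broken, but the content is inspected, and potentially disclosed, before it is ever encrypted.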

The trend seems to be headed in the direction of some governmental bypass of conventional encryption. However, just like IBM’s quantum-proof encryption was created to solve a weakness in encryption, consumers will likely find another way to encrypt their data if they feel there is a need.


Pacemakers, ICDs, and ICMs – Oh My! Implantable Heart Detection Devices

Janae Aune, MJLST Staffer

Heart attacks and heart disease kill hundreds of thousands of people in the United States every year. Heart disease affects every person differently based on their genetic and ethnic background, lifestyle, and family history. While some people are aware of their risk of heart problems, over 45 percent of sudden cardiac deaths occur outside of the hospital. With a condition as spontaneous as a heart attack, accurate information tracking and reporting is vital to effective treatment and prevention. As in any market, the market for heart monitoring devices is diverse, with new equipment arriving every year. The newest device in a long line of technology is the LINQ monitoring device. LINQ builds on and works with already established devices that have been used by the medical community.

Pacemakers were first used effectively in 1969 when lithium batteries were invented. These devices are surgically implanted under the skin of a patient’s chest and are meant to help control the heartbeat. These devices can be implanted for temporary or permanent use and are usually targeted at patients who experience bradycardia, a slow heart rate. These devices require consistent check-ins by a doctor, usually every three to six months. Pacemakers must also be replaced every 5 to 15 years depending on how long the battery life lasts. These devices revolutionized heart monitoring but involve significant risks with the surgery and potential device malfunctioning.

Implantable cardioverter defibrillators (ICDs) are also surgically implanted devices but differ from pacemakers in that they deliver a single shock when needed rather than continuous pacing pulses. ICDs are similar to the paddles doctors use when trying to restart a heart in the hospital (think of someone yelling “charge!”). These devices are used mostly in patients with tachycardia, a heartbeat that is too fast. Implantation of an ICD requires feeding wires through blood vessels to the heart. A newly developed subcutaneous ICD (S-ICD) gives patients who have structural defects in their heart’s blood vessels another option. Like a pacemaker, an ICD monitors activity constantly, but its data is read only at follow-up appointments with the doctor. ICDs last an average of seven years before the battery needs to be replaced.

The Reveal LINQ system is a newly developed heart monitoring device that records and transmits continuous information to a patient’s doctor at all times. The system requires surgical implantation of a small device known as the insertable cardiac monitor (ICM). The ICM works with another component called the patient monitor, which is a bedside monitor that transmits the continuous information collected by the ICM to a doctor instantly. A patient assistant control is also available which allows the patient to manually mark and record particular heart activities and transmit those in more detail. The LINQ system allows a doctor to track a patient’s heart activity remotely rather than requiring the patient to come in for the history to be examined. Continuous tracking and transmitting allow a patient’s doctor to more accurately examine heart activity and therefore create a more effective treatment approach.

With the development of wearable technology meant to track health information and transmit it to the wearer, devices such as the LINQ system provide new opportunities for technologies to work together to promote better health practices. The Apple Watch Series 4 included electrocardiogram monitoring that records heart activity and checks the reading for atrial fibrillation (AFib). This is the same heart activity that pacemakers, ICDs, and the LINQ system are meant to monitor. The future of heart attack and heart disease detection and treatment could be massively impacted by the ability to monitor heart behavior in multiple different ways. Between devices that can shock your heart, continuously monitor and transmit information about it, and alert you from your watch when your heart rate may be abnormal, a future of decreased heart problems could be a reality.

With all of these newly developed methods of continuous tracking, the question arises: how is all of that information protected? Health and heart behavior, which is internal and out of your control, is as personal as information gets. Electronic monitoring and transmission of this data open it up to cybersecurity targeting. Cybersecurity and data privacy issues with these devices have started to be addressed more fully; however, the concerns differ depending on which implantable device a patient has. Vulnerabilities have been identified in ICD devices that would allow an unauthorized individual to access and potentially manipulate the device. Scholars have argued that efforts to decrease vulnerabilities should focus on protecting the confidentiality, integrity, and availability of information transmitted by implantable devices. The FDA has indicated that the use of a home monitoring system could decrease the potential vulnerabilities. As the benefits from heart monitors and heart data continue to grow, we need to be sure that our privacy protections grow with them.


Wearable, Shareable, Terrible? Wearable Technology and Data Protection

Alex Wolf, MJLST Staffer

You might consider the first wearable technology of the modern-day to be the Sony Walkman, which celebrates its 40th anniversary this year. After the invention of Bluetooth 1.0 in 2002, commercial competitors began to realize the vast promise that this emergent technology afforded. Fifteen years later, over 265 million wearable tech devices are sold annually. It looks to be a safe bet that this trend will continue.

A popular subset of wearable technology is the fitness tracker. The user attaches the device to themselves, usually on their wrist, and it records their movements. Lower-end trackers record basics like steps taken, distance walked or run, and calories burned, while the more sophisticated ones can track heart rate and sleep statistics (sometimes also featuring fun extras like Alexa support and entertainment app playback). And although this data could not replace the care and advice of a healthcare professional, there have been positive health results. Some people have learned of serious health problems only once they started wearing a fitness tracker. Other studies have found a correlation between wearing a FitBit and increased physical activity.

Wearable tech is not all good news, however; legal commentators and policymakers are worried about privacy compromises that result from personal data leaving the owner’s control. The Health Insurance Portability and Accountability Act (HIPAA) was passed by Congress with the aim of providing legal protections for individuals’ health records and data if they are disclosed to third parties. But, generally speaking, wearable tech companies are not bound by HIPAA’s reach. The companies claim that no one else sees the data recorded on your device (with a few exceptions, like the user’s express written consent). But is this true?

A look at the modern American workplace can provide an answer. Employers are attempting to find new ways to manage health insurance costs as survey data shows that employees are frequently concerned with the healthcare plan that comes with their job. Some have responded by purchasing FitBits and other like devices for their employees’ use. Jawbone, a fitness device company on its way out, formed an “Up for Groups” plan specifically marketed towards employers who were seeking cheaper insurance rates for their employee coverage plans. The plan allows executives to access aggregate health data from wearable devices to help make cost-benefit determinations for which plan is the best choice.

Hearing the commentators’ and state elected representatives’ complaints, members of Congress have responded; Senators Amy Klobuchar and Lisa Murkowski introduced the “Protecting Personal Health Data Act” in June 2019. It would create a National Task Force on Health Data Protection, which would work to advise the Secretary of Health and Human Services (HHS) on creating practical minimum standards for biometric and health data. The bill is a recognition that HIPAA has serious shortcomings for digital health data privacy. As a 2018 HHS Committee Report noted, “A class of health records that can be subject to HIPAA or not subject to HIPAA is personal health records (PHRs) . . . PHRs not subject to HIPAA . . . [have] no other privacy rules.”  Dena Mendolsohn, a lawyer for Consumer Reports, remarked favorably that the bill is needed because the current framework is “out of date and incomplete.”

The Supreme Court has recognized privacy rights in cell-site location data, and a federal court recognized standing to sue for a group of plaintiffs whose personally identifiable information (PII) was hacked and uploaded onto the Dark Web. Many in the legal community are pushing for the High Court to offer clearer guidance to both tech consumers and corporations on the state of protection of health and other personal data, including private rights of action. Once there is a resolution on these procedural hurdles, we may see firmer judicial directives on an issue that compromises the protected interests of more and more people.


Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14th, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May of 2018, sent many U.S. companies scrambling to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. Google’s fine makes it the first U.S. tech giant to face GDPR enforcement. While a 50 million euro (roughly 57 million dollar) fine may sound hefty, it is actually relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for getting consumers to consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) the inability of users to consent to individual data processes rather than accepting Google’s Privacy Policy and Terms of Service whole cloth.

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.


A Data Privacy Snapshot: Big Changes, Uncertain Future

Holm Belsheim, MJLST Staffer

When Minnesota Senator Amy Klobuchar announced her candidacy for the Presidency, she stressed the need for new and improved digital data regulation in the United States. It is perhaps telling that Klobuchar, no stranger to internet legislation, labelled data privacy and net neutrality as cornerstones of her campaign. While data bills have been frequently proposed in Washington, D.C., few members of Congress have been as consistently engaged in this area as Klobuchar. Beyond expressing her longtime commitment to the idea, the announcement may also be a savvy way to tap into recent sentiment. Over the past several years, citizens have experienced increasingly intrusive breaches of their information. Breaches at Target, Experian, and others exposed the information of hundreds of millions of people, including a shocking 773 million records in one recent report. See if you were among them. (Disclaimer: neither I nor MJLST are affiliated with these sites, nor can we guarantee accuracy.)

Data privacy has been big news in recent years. Internationally, Brazil, India, and China have recently put forth new legislation, but the big story was the European Union’s General Data Protection Regulation, or GDPR, which began enforcement last year. This massive regulatory scheme codifies the European presumption that an individual’s data is not available for business purposes without the individual’s explicit consent, and even then only in certain circumstances. While the scheme has been criticized as both vague and overly broad, one crystal clear element is the seriousness of its enforcement capabilities. Facebook and Google each received large fines soon after the GDPR’s official commencement, and other companies have partially withdrawn from the EU in the face of compliance requirements. No clear challenge has emerged, and it looks like the GDPR is here to stay.

Domestically, the United States has nothing like the GDPR. The existing patchwork of federal and state laws leaves much to be desired. Members of Congress propose new laws regularly, most of which then die in committee or are shelved. California has perhaps taken the boldest step in recent years, with its expansive California Consumer Privacy Act (CCPA) scheduled to begin enforcement in 2020. While different from the GDPR, the CCPA similarly imposes heightened standards for companies to comply with, more remedies and transparency for consumers, and specific enforcement regimes to ensure requirements are met.

The consumer-friendly CCPA has drawn enormous scrutiny and criticism. While they evince modest support, or perhaps just pay lip service, tech titans like Facebook and Google are none too pleased with the Act’s potential infringement upon their access to Americans’ data. Since 2018, affected companies have lobbied Washington, D.C. for expansive and modernized federal data privacy laws. One common, though less publicized, element in these proposals is an explicit federal preemption provision, which would nullify the CCPA and other state privacy policies. While nothing has yet emerged, this issue isn’t going anywhere soon.


AI: Legal Issues Arising From the Development of Autonomous Vehicle Technology

Sooji Lee, MJLST Staffer

Have you ever heard of the “Google DeepMind Challenge Match”? In 2016, AlphaGo, the artificial intelligence (hereinafter “AI”) created by Google, played a five-game Go match against Lee Sedol, an 18-time world champion. Go is arguably the most complicated game humans have ever devised, with vastly more possible moves and positions than chess. People who understood the complexity of Go did not believe an AI could calculate all of those variables and defeat a world champion who relied on instinct and experience. AlphaGo, however, defeated Mr. Lee four games to one, leaving the whole world amazed.

Another use of AI is autonomous vehicles (hereinafter “AV”), which promise to fulfill mankind’s long-time dream: driving a car without driving. Nearly every automobile manufacturer with enough capital to reinvest in new technology, including GM, Toyota, and Tesla, is aggressively investing in AV technologies. As a natural consequence of the increasing interest in AV technology, vehicle manufacturers have performed several driving tests of AVs, and many legal issues have arisen from those trials. During my summer in Korea, I had a chance to research legal issues for an intellectual property infringement lawsuit over AV technology between two automobile manufacturers.

For a normal vehicle, a natural person is responsible if there is an accident. But who should be liable when an AV malfunctions? The owner of the vehicle, the manufacturer of the vehicle, or the entity that developed the vehicle’s software? This is one of the hardest questions arising from the commercialization of AVs. I personally think liability could be imposed on any of these entities depending on the scenario. If the accident happened because the vehicle’s AI system malfunctioned, the software provider should be liable. If the accident occurred because the vehicle itself malfunctioned, the manufacturer should be held liable. But if the accident occurred because the owner of the vehicle poorly maintained the car, the owner should be held liable. To sum up, there is no one-size-fits-all answer to who should be held liable; courts should consider the causal sequence of the accident when determining liability.

Also, legislative bodies must take data privacy into consideration when enacting statutes governing AVs. There are enormous numbers of cars on the road, and drivers must interact with one another to reach their destinations safely. AVs therefore need to share their locations and current situations to interact well with other AVs, which suggests that a single entity should collect each AV’s information and process it to prevent accidents and manage traffic effectively. Nowadays, almost every driver uses navigation, which means people already provide their location to a service provider such as Google Maps. Some may argue that providers like Google Maps already serve as collectors of vehicle information, but there are many navigation services. Since all AVs must interact with each other, centralizing the data with one service provider is wise. Yet while centralizing the data and limiting consumer choice to one service provider may be advisable, the danger of a data breach would be heightened should a single provider be selected. This is an important and pressing concern for legislatures considering legislation that would centralize AV data with one service provider.
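As a purely illustrative sketch of what such a centralized collector might look like, consider the following. The service, field names, and radius logic are all hypothetical; a real system would involve authenticated vehicle-to-infrastructure protocols and far more careful handling of the location data whose breach risk is discussed above.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class VehicleReport:
    vehicle_id: str   # pseudonymous identifier, not the owner's identity
    lat: float
    lon: float
    speed_mph: float

class CentralTrafficService:
    """Hypothetical single collector: every AV reports its position, and the
    service shares back nearby vehicles so cars can anticipate one another."""

    def __init__(self) -> None:
        self.latest: Dict[str, VehicleReport] = {}

    def report(self, r: VehicleReport) -> None:
        # Store only the most recent report per vehicle.
        self.latest[r.vehicle_id] = r

    def nearby(self, r: VehicleReport, radius_deg: float = 0.01) -> List[VehicleReport]:
        # Crude bounding-box query; a real system would use geospatial indexing.
        return [o for o in self.latest.values()
                if o.vehicle_id != r.vehicle_id
                and abs(o.lat - r.lat) <= radius_deg
                and abs(o.lon - r.lon) <= radius_deg]
```

The sketch also makes the breach concern tangible: a single service holding every vehicle’s latest position is exactly the kind of concentrated dataset legislatures would need to protect.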

Therefore, enacting effective, smart, and forward-looking statutes is important to prevent potential problems. Notwithstanding the complexity, many U.S. states take a positive stance toward the commercialization of AVs, since the industry could become profitable. According to statistics from the National Conference of State Legislatures, 33 states have introduced legislation and 10 states have issued executive orders related to AV technology. For example, Florida’s 2016 legislation expanded the allowed operation of autonomous vehicles on public roads, and Arizona’s Governor issued an executive order encouraging the development of relevant technologies. With steps like these, the development of a workable legal framework is possible someday.