Privacy

The Future of Neurotechnology: Brain Healing or Brain Hacking?

Gordon Unzen, MJLST Staffer

Brain control and mind reading are no longer ideas confined to the realm of science fiction—such possibilities are now the focus of science in the field of neurotechnology. At the forefront of the neurotechnology revolution is Neuralink, a medical device company owned by Elon Musk. Musk envisions that his device will allow communication with a computer via the brain, restore mobility to the paralyzed and sight to the blind, create mechanisms by which memories can be saved and replayed, give rise to abilities like telepathy, and even transform humans into cyborgs to combat sentient artificial intelligence (AI) machines.[1]

Both theoretical and current applications of brain-interfacing devices, however, raise concerns about infringements upon privacy and freedom of thought, with the technology providing intimate information ripe for exploitation by governments and private companies.[2] Now is the time to consider how to address the ethical issues raised by neurotechnology so that people may responsibly enjoy its benefits.

What is Neurotechnology?

Neurotechnology describes the use of technology to understand the brain and its processes, with goals to control, repair, or improve brain functioning.[3] Neurotechnology research uses techniques that record brain activity such as functional magnetic resonance imaging (fMRI), and that stimulate the brain such as transcranial electrical stimulation (tES).[4] Both research practices and neurotechnological devices can be categorized as invasive, wherein electrodes are surgically implanted in the brain, or non-invasive, which do not require surgery.[5] Neurotechnology research is still in its infancy but development rates will likely continue accelerating with the use of increasingly advanced AI to help make sense of the data.[6]

Work in neurotechnology has already led to the proliferation of applications impacting fields from medicine to policing. Bioresorbable electronic medication speeds up nerve regeneration, deep brain stimulators function as brain pacemakers targeting symptoms of diseases like Parkinson’s, and neurofeedback visualizes brain activity for the real-time treatment of mental illnesses like depression.[7] Recently, a neurotechnological device that stimulates the spinal cord allowed a stroke patient to regain control of her arm.[8] Electroencephalogram (EEG) headsets are used by gamers as a video game controller and by transportation services to track when a truck driver is losing focus.[9] In China, the government uses caps to scan employees’ brainwaves for signs of anxiety, rage, or fatigue.[10] “Brain-fingerprinting” technology, which analyzes whether a subject recognizes a given stimulus, has been used by India’s police since 2003 to ‘interrogate’ a suspect’s brain, although there are questions regarding the scientific validity of the practice.[11]

Current research enterprises in neurotechnology aim to push the possibilities much further. Mark Zuckerberg’s Meta financed invasive neurotechnology research using an algorithm that decoded subjects’ answers to simple questions from brain activity with 61% accuracy.[12] The long-term goal is to allow everyone to control their digital devices through thought alone.[13] Musk similarly aims to begin human trials for Neuralink devices designed to help paralyzed individuals communicate without the need for typing, and he hopes this work will eventually allow Neuralink to fully restore their mobility.[14] However, Musk has hit a roadblock in failing to acquire FDA approval for human testing, despite claiming that Neuralink devices are safe enough that he would consider using them on his children.[15] Others expect that neurofeedback will eventually see mainstream deployment through devices akin to a fitness tracker, allowing people to constantly monitor their brain health metrics.[16]

Ethical Concerns and Neurorights

Despite the possible medical and societal benefits of neurotechnology, it would be dangerous to ignore the ethical red flags raised by devices that can observe and impose on brain functioning. In a world of increasing surveillance, the last bastion of privacy and freedom exists in the brain. This sanctuary is lost when even the brain is subject to data collection practices. Neurotechnology may expose people to dystopian thought policing and hijacking, but more subtly, could lead to widespread adverse psychological consequences as people live in constant fear of their thoughts being made public.[17]

Particularly worrisome is how current government and business practices inform the likely near-future use of data collected by neurotechnology. In law enforcement contexts such as interrogations, neurotechnology could allow the government to cause people to self-incriminate in violation of the Fifth Amendment. Private companies that collect brain data may be required to turn it over to governments, analogous to the use of Fitbit data as evidence in court.[18] If the data do not go to the government, companies may instead sell them to advertisers.[19] Even positive implementations can be taken too far. EEG headsets that allow companies to track the brain activity of transportation employees may be socially desirable, but the widespread monitoring of all employees for productivity is a plausible and sinister next step.

In light of these concerns, ethicist and lawyer Nita Farahany argues for updating human rights law to protect cognitive privacy and liberty.[20] Farahany describes a right of self-determination regarding neurotechnology to secure freedom from interference, to access the technology if desired, and to change one’s own brain by choice.[21] This libertarian perspective acknowledges the benefits of neurotechnology for which many may be willing to sacrifice privacy, while also ensuring that people have an opportunity to say no to its imposition. Others take a more paternalistic approach, questioning whether further regulation is needed to limit possible neurotechnology applications. Sigal Samuel notes that cognitive-enhancing tools may create competition that requires people to either use the technology or get left behind.[22] Decisions to engage with neurotechnology thus will not be made with the freedom Farahany imagines.

Conclusion

Neurotechnology holds great promise for augmenting the human experience. The technology will likely play an increasingly significant role in treating physical disabilities and mental illnesses. In the near future, we will see the continued integration of thought as a method to control technology. We may also gain access to devices offering new cognitive abilities from better memory to telepathy. However, using this technology will require people to give up extremely private information about their brain functions to governments and companies. Regulation, whether it takes the form of a revamped notion of human rights or paternalistic lawmaking limiting the technology, is required to navigate the ethical issues raised by neurotechnology. Now is the time to act to protect privacy and liberty.

[1] Rachel Levy & Marisa Taylor, U.S. Regulators Rejected Elon Musk’s Bid to Test Brain Chips in Humans, Citing Safety Risks, Reuters (Mar. 2, 2023), https://www.reuters.com/investigates/special-report/neuralink-musk-fda/.

[2] Sigal Samuel, Your Brain May Not be Private Much Longer, Vox (Mar. 17, 2023), https://www.vox.com/future-perfect/2023/3/17/23638325/neurotechnology-ethics-neurofeedback-brain-stimulation-nita-farahany.

[3] Neurotechnology, How to Reveal the Secrets of the Human Brain?, Iberdrola, https://www.iberdrola.com/innovation/neurotechnology#:~:text=Neurotechnology%20uses%20different%20techniques%20to,implantation%20of%20electrodes%20through%20surgery (last accessed Mar. 19, 2023).

[4] Id.

[5] Id.

[6] Margaretta Colangelo, How AI is Advancing NeuroTech, Forbes (Feb. 12, 2020), https://www.forbes.com/sites/cognitiveworld/2020/02/12/how-ai-is-advancing-neurotech/?sh=277472010ab5.

[7] Advances in Neurotechnology Poised to Impact Life and Health Insurance, RGA (July 19, 2022), https://www.rgare.com/knowledge-center/media/research/advances-in-neurotechnology-poised-to-impact-life-and-health-insurance.

[8] Stroke Patient Regains Arm Control After Nine Years Using New Neurotechnology, WioNews (Feb. 22, 2023), https://www.wionews.com/trending/stroke-patients-can-regain-arm-control-using-new-neurotechnology-says-research-564285.

[9] Camilla Cavendish, Humanity is Sleepwalking into a Neurotech Disaster, Financial Times (Mar. 3, 2023), https://www.ft.com/content/e30d7c75-90a3-4980-ac71-61520504753b.

[10] Samuel, supra note 2.

[11] Id.

[12] Sigal Samuel, Facebook is Building Tech to Read your Mind. The Ethical Implications are Staggering, Vox (Aug. 5, 2019), https://www.vox.com/future-perfect/2019/8/5/20750259/facebook-ai-mind-reading-brain-computer-interface.

[13] Id.

[14] Levy & Taylor, supra note 1.

[15] Id.

[16] Manuela López Restrepo, Neurotech Could Connect Our Brains to Computers. What Could Go Wrong, Right?, NPR (Mar. 14, 2023), https://www.npr.org/2023/03/14/1163494707/neurotechnology-privacy-data-tracking-nita-farahany-battle-for-brain-book.

[17] Vanessa Bates Ramirez, Could Brain-Computer Interfaces Lead to ‘Mind Control for Good’?, Singularity Hub (Mar. 16, 2023), https://singularityhub.com/2023/03/16/mind-control-for-good-the-future-of-brain-computer-interfaces/.

[18] Restrepo, supra note 16.

[19] Samuel, supra note 12.

[20] Samuel, supra note 2.

[21] Id.

[22] Id.


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, overflowing emergency rooms were coupled with doctor shortages.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and increased to as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth services has decreased slightly in recent years, it appears to be here to stay. Nowhere has the use of telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three mental health outpatient visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, this shift in the way healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth in mental health services is privacy, for several reasons. The primary concern stems from the fact that telehealth takes place over the phone or on personal computers, and when personal devices are used, it is nearly impossible to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While concerns remain, these secure servers have mitigated much of the risk.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth services for mental health is a little more difficult to address. This concern comes from the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] With the increased use of telehealth for mental health services, there has also been an increase in the use of these mental health apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, credit score, etc.[16]

In fact, according to a 2022 study, a popular therapy app, BetterHelp, was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

An example of information that does get shared is the intake questionnaire.[19] An intake questionnaire needs to be filled out on BetterHelp, or other therapy apps, in order for the customer to be matched with a provider.[20] The answers to these intake questionnaires were specifically found to have been shared by BetterHelp with an analytics company, along with the approximate location and device of the user.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long the therapy sessions are, how long someone spends sending messages on the app, what times someone logs into the app, what times someone sends a message or speaks to their therapists, the approximate location of the user, how often someone opens the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA put out guidance saying that it is going to use its enforcement discretion when dealing with mental health apps.[27] This means that if the privacy risk seems to be low, the FDA is not going to enforce or chase these companies.[28]

Ultimately, if mental telehealth services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps are offering telehealth services, much like any healthcare provider that is covered by HIPAA. Knowledge that mental health apps share personal data so freely breeds distrust, and many users have lost confidence in them as a result. In the long run, regulatory oversight would increase the pressure on these companies to show that their services can be trusted, potentially increasing their success by building trust with the public.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes, (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, do Mental Wellness Apps Need More Oversight?, Fierce Healthcare, (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America, (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes, (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses will see enhanced regulation of data privacy. A number of new data security laws and regulations came into effect in 2023, increasing the legal requirements on company-held data in order to protect customers. The most notable are the FTC Safeguards Rule and the EU’s NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. The FTC requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by this rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires that covered financial institutions “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”

One specific question that arises is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the Rule does not run counter to the FTC’s mission of protecting consumers. However, its economic costs and effects are debatable. One concern is that the Rule may impose substantial costs, especially on small businesses, which have less capital than large companies and may find the new obligations unbearable. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the requirements under the Rule, or have sophisticated programs that are easily adaptable to new obligations, the FTC has still underestimated significant burdens.[4] Specifically, labor shortages have hampered efforts by financial institutions to implement information security systems, and supply chain issues have delayed the acquisition of equipment needed to update those systems. Importantly, according to Commissioner Wilson, most of these factors are outside the control of the financial institutions. Implementing a heightened standard would thus cause unfairness, especially to small financial institutions that have even more trouble obtaining the necessary equipment during times of supply chain and labor shortages.

Recognizing such difficulties, the FTC did offer some leniency in implementing the Rule. Specifically, the FTC extended the compliance deadline by six months, primarily because supply chain issues may cause delays and because qualified personnel to implement information security programs are in short supply. This extension benefits the Rule because it gives covered financial institutions time to adjust and comply.

Another concern is that the FTC Safeguards Rule’s mandates will not significantly reduce the data security risks facing customers. The answer to this question remains uncertain, as the Rule just came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rulemaking process the FTC sought comments on the proposed Safeguards Rule and extended the public comment deadline by 60 days.[5] This suggests that the FTC took careful consideration of how to most effectively reduce data security risks by giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires member states to be appropriately equipped with response and information systems, sets up a Cooperation Group to facilitate the exchange of information among member states, and seeks to ensure a culture of security across sectors that rely heavily on critical infrastructure, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers. The NIS2 Directive echoes the FTC Safeguards Rule to a large extent in its elevated standard of cybersecurity measures.

However, the NIS2 Directive differs in that it imposes duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive tasks ENISA with assisting Member States and the Cooperation Group set up under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Ordering the agency itself to facilitate the Directive’s implementation may add to its likelihood of success. Although the outcome is uncertain, primarily because of the broad language of the Directive, the burdens on financial institutions will at least be lessened to some extent. What distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This offers more flexibility than the FTC Safeguards Rule’s extension. As the Directive passes through national legislative processes, financial institutions will have more time to prepare for and respond to the proposed changes.

In summary, data privacy laws are tightening up globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate similar industries. That being said, regardless of the EU, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the outcome of the Rule is uncertain, the six-month extension will at least offer a certain degree of flexibility.

Notes

[1]https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.



Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences on human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code by responding to prompts queried by users. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. This GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or attempting the same prompt multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
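The autoregressive idea described above can be made concrete with a deliberately tiny sketch. The word table and probabilities below are invented for illustration; a real model like GPT-3 learns a distribution over tens of thousands of tokens using billions of neural-network parameters, not a hand-written dictionary.

```python
import random

# Hand-built table of next-word probabilities (purely illustrative).
BIGRAMS = {
    "the": {"brain": 0.6, "model": 0.4},
    "brain": {"is": 1.0},
    "model": {"predicts": 1.0},
    "predicts": {"words": 1.0},
}

def generate(prompt, max_words=5, seed=0):
    """Autoregressive generation: repeatedly sample the next word given
    everything generated so far, append it, and repeat."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

However short, the loop captures the core mechanism: each output word becomes part of the input for the next prediction, which is why small changes to a prompt can steer the entire continuation.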

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick in the Harvard Business Review claims that it is a tipping point for AI because of this difference in quality, as it can even be used to write weight-loss plans and children’s books, and to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it wrote only 347 words—nowhere near the minimum of 1,000 words that I had set for it. What these examples make evident, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student use ChatGPT to complete a four-hour project in less than an hour, creating computer code for a startup prototype using code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, although by the skin of its silicon teeth. Indeed, it even passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class, and would actually be placed on academic probation, it would still, notably, graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being used only in areas where failure is expensive and intolerable—such as autonomous driving—AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over some online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities regarding productivity expands exponentially. Businesses and individuals can save time and resources by having AI do menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, these filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at its answers. Finally, and perhaps critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matter.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans are vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to produce human sentience. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed to be impossible sooner rather than later. In this light, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] In teaching AI to generate text with a natural language style, computer scientists engage in “generative pre-training,” feeding the AI huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT how its machine learning incorporates user data, and it described itself as a “living” AI, one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the prospect of this AI indiscriminately sucking in user data like a black hole.
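The dynamic can be illustrated in a deliberately toy form (this is not OpenAI’s actual pipeline, and all names here are hypothetical): picture a model whose statistics are first built from a pre-training corpus and then updated by every user message, so that user-supplied text literally becomes part of the model.

```python
# Toy illustration (not OpenAI's actual architecture): a "model" that keeps
# learning from user input, so the users' text becomes part of the model.
from collections import Counter

class ToyLanguageModel:
    def __init__(self, pretraining_corpus: str):
        # "Pre-training": absorb a large body of unlabeled text.
        self.word_counts = Counter(pretraining_corpus.split())

    def fine_tune(self, user_message: str) -> None:
        # "Fine-tuning": every user interaction updates the model's statistics.
        self.word_counts.update(user_message.split())

    def most_likely_word(self) -> str:
        return self.word_counts.most_common(1)[0][0]

model = ToyLanguageModel("the cat sat on the mat")
print(model.most_likely_word())  # the

# After enough user messages, user-supplied data dominates the model.
for _ in range(5):
    model.fine_tune("privacy privacy matters")
print(model.most_likely_word())  # privacy
```

The point of the sketch is the privacy concern in miniature: after fine-tuning, there is no clean separation between “the model” and “the data users typed into it.”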

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on GPS data. But the way our data is being used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Complicating matters, the general public may not be fully aware of what privacy protections—or lack thereof—are in place in the United States. In brief, U.S. law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of the ECPA addresses matters like the interception of communications (for example, by wiretapping) and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the internet, yet the internet is far too amorphous to be regulated by this outdated Act, and AI tools on the internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered to be one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise because the collection and use of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly being used to evolve the technology?[7] Unauthorized collection and use of European residents’ data violates the GDPR, but whether ChatGPT’s data practices constitute such a “use” is not clear. The use of data in ChatGPT’s fine-tuning process could arguably be a violation of the GDPR.

While a unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and disclosing it could potentially violate ABA rules. As ChatGPT generates even more public fervor, professionals are likely to try the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI stipulates that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), affiliates and subsidiaries of the company, the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not even contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to be slow in coming in the U.S., if it comes at all. If users of ChatGPT are cognizant of what they input into the tool and stay informed about OpenAI’s obligations to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023),  https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta’s “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the actions that are tracked include purchases, registrations, cart additions, searches and more. This information can then be used by the website owners to better understand user behavior. Website owners can more efficiently use ad spend by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between these other tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to grab personal information beyond the simple user web actions it was supposed to be limited to and connect it to Facebook profiles.[6]
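Why that matching matters can be shown with a deliberately simplified sketch (hypothetical data and field names, not Meta’s actual code): once per-site events keyed to a browser identifier are joined against a table linking that identifier to a named account, the supposedly anonymous event stream becomes an identifiable browsing history.

```python
# Hypothetical illustration of analytics-to-profile matching.
# All identifiers and field names here are invented for the example.

# Events a tracking script might report: page actions keyed by a cookie.
events = [
    {"cookie": "abc123", "site": "hospital.example", "action": "ViewContent"},
    {"cookie": "abc123", "site": "shop.example", "action": "AddToCart"},
]

# A separate table linking browser cookies to social-media accounts.
profiles = {"abc123": {"user_id": 4417, "name": "Jane Doe"}}

# Joining the two turns "anonymous" analytics into a named browsing history.
identified = [
    {**event, **profiles[event["cookie"]]}
    for event in events
    if event["cookie"] in profiles
]

print(identified[0]["name"])  # Jane Doe
print(identified[1]["site"])  # shop.example
```

The privacy harm alleged in the lawsuits follows the same shape: the event data alone may look anonymous, but the join against an account table is what makes it personally identifiable.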

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] That number may decrease, however, after Meta’s recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel incorrectly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, using the information Pixel collected from hospital websites and patient portals by matching user information with Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and then added Pixel code to its website to evaluate the campaign’s effectiveness. Pixel proceeded to send private and identifiable user information to Meta.[9] Meta’s co-defendants in another lawsuit, the University of California San Francisco and Dignity Health (“UCSF”), were accused of illegally gathering patient information via Pixel code on their patient portal, with private medical information then distributed to Meta. Pharmaceutical companies, it is claimed, then gained access to this medical information and sent out targeted ads based on it.[10] That is just one example; all in all, more than 1 million patients have been affected by the Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions against Meta for violations of the Video Privacy Protection Act (the “VPPA”). The VPPA was enacted in the 1980s to cover videotape and audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions accuse Meta of using the Pixel tool to take video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that collecting video viewing activity in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than collecting it anonymously, so that Pixel users could target ads at viewers) violated the VPPA. Under the VPPA, however, Meta is not the defendant in these lawsuits; rather, the defendants are the companies that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy. Because of that, the number of civil data-protection litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act of 1996, protects patient information from disclosure without patient consent, but HIPAA actions in the patient portal cases would have to be initiated by the U.S. government. Claimants are therefore suing Meta under consumer protection and other privacy laws such as the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act instead.[14] These laws allow individuals to sue directly, whereas under federal statutes like HIPAA the government may move slowly, or not at all. And in the video tracking cases, litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle in Meta’s favor, involvement in more privacy disputes is not ideal for the tech giant, as it may hurt the trustworthiness of Meta Platforms in the eyes of the public.

Possible Outcomes

If found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run into the millions as well and would vary from state to state given the variety of acts under which the claims are brought.[17] In the UCSF data case specifically, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, these terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent needs to come from forms separate from online user agreements.[19] If more lawsuits emerge from the actions of Pixel, it is quite possible that companies will move away from web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel stops being worth the risk such tools present to websites.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the latest hit with a class action over Meta’s alleged patient data mining, Fierce Healthcare (Aug. 12, 2022 10:30AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators.

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 

But I Thought my Texts Were Private?

Regardless of the passcode on your phone or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), authentication only requires proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant’s phone, a key way to authenticate texts is demonstrating the personal nature of the messages, which emulate earlier communication.[4] However, for texts to be admitted as evidence and survive hearsay objections, proof of the messages through screenshots, printouts, or other tangible methods of authentication is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the availability of this capability may increase perpetrators’ use of text messaging, since disappearing harassment will be easier to get away with. Further, victims will be less likely to capture the evidence in the short window before the proof is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegel, who spoke out against this software, shared that “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when victims are without proof and the perpetrator denies sending anything, psychological pain may result from such “gaslighting” and undermining of the victim’s experience.[7]

Why are Text Messages so Important?

Text messages have been critical evidence in proving defendants’ guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her boyfriend to commit suicide.[8] Not only were these texts of value in proving reckless conduct, they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have succeeded in her abuse by sending and then unsending or editing her messages later?

Text messaging is also a popular tool for perpetrators of sexual harassment, and it happens every day. In a Rhode Island Supreme Court case, communication via iMessage was central to the finding of first-degree sexual assault, as the 17-year-old plaintiff felt too afraid to receive a hospital examination after her attack.[9] Fortunately, the plaintiff had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim’s first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to help one’s case through messages that can paint a picture of the incident or of the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to recall messages and 15 minutes to edit them, is actually an amendment to Apple’s originally offered timeframe of 15 minutes to unsend. This change came in light of efforts by an advocate for survivors of sexual harassment and assault, who wrote a letter to the Apple CEO warning of the dangers of the new unsending capability.[10] While the resulting decreased timeframe leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit to how much text can be edited, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window only gives them less time to react and less time to save the messages for evidence. While we can hope that the newly decreased window makes perpetrators think harder before sending a text they may not be able to delete, this is wishful thinking. The fact that almost half of young people report being victims of cyberbullying even when there has been no option to rescind or edit one’s messages suggests that the length of the window likely does not matter.[11] The abilities of the new Apple software should be disabled; Apple’s “fix” to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here’s How to Do It, CNBC (Sep. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 Fed. Appx. 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, DigitalTrends (Jul. 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2018).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SlickText (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Would Autonomous Vehicles (AVs) Interfere With Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes remain the leading cause of non-natural death for U.S. citizens. Most of the time, the primary causes are human errors rather than instrument failures. Therefore, autonomous vehicles (“AVs”), automobiles that promise to operate themselves without a human driver, are an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving”—essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world to assist the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly Light Detection and Ranging (LiDAR) and cameras.[4] A LiDAR is a device that emits laser pulses and uses time-of-flight ranging principles, analogous to sonar, to estimate the depth of the surroundings: the emitted laser pulses travel forward, hit an object, then bounce back to the receivers; the time taken for the pulses to travel back is measured, and the distance is computed. With this information about distance and depth, a 3D point cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity,” the measure of the return strength of the laser pulse, which is based in part on the reflectivity of the surface struck by the pulse. LiDAR “intensity” data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors, the camera and the LiDAR, complement each other: the former conveys rich appearance data with more detail about the objects, whereas the latter captures 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable perception of the environment.[6]
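The distance computation described above is simple time-of-flight arithmetic: a pulse travels to the object and back at the speed of light, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative only, not any vendor’s actual firmware):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Return the one-way distance to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~200 nanoseconds hit an object ~30 meters away.
print(round(lidar_distance(200e-9), 2))  # 29.98
```

Repeating this computation for millions of pulses per second, each fired at a known angle, is what yields the 3D point cloud the article describes.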

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, most artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to do new tasks much as humans do: by exploring data, identifying patterns, and improving upon past experiences. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks the computer can program itself to do. Since “driving” is a combination of multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively.

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, this data will be of public streets and road users, and its large-scale collection is further empowered by various technologies to detect and identify, track and trace, and mine and profile data. When profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is great temptation for the information to be used for purposes other than those for which it was originally collected, as has been the case with much other “big data” today. Law enforcement officers who get their hands on this AV data could track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch on suspicious locations at high frequency. The trajectories can be matched to identified individuals via car models and license plates. The police may then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?

As we know, use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reasons. Although surveillance helps police officers detect criminals,[7] extraneous surveillance has social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, and cause other psychological effects.[8] Case law gives us guidance on interpreting and applying the Fourth Amendment standards of “trespass” and “unreasonable searches and seizures” by the police. Three principal cases, Olmstead v. United States, 277 U.S. 438 (1928), Goldman v. United States, 316 U.S. 129 (1942), and the modern United States v. Jones, 565 U.S. 400 (2012), limit Fourth Amendment protection to guarding against physical intrusion into private homes and properties. Such protection would not be helpful in the case of LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite Jones, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967) is more widely accepted. Instead of tracing physical boundaries of “persons, houses, papers, and effects,” the Katz test asks whether there is an expectation of privacy that society recognizes as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz court.[9]

United States v. Knotts, 460 U.S. 276 (1983) was a public street surveillance case that invoked the Katz test. In Knotts, the police installed a beeper on to the defendant’s vehicle to track it. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test and considered the question of whether there was a “reasonable expectation of privacy,” meaning that such expectation was recognized as “reasonable” by society.[11] The Court’s answer is in the negative: unlike a person in his dwelling place, a person who is traveling on public streets “voluntarily conveyed to anyone who wanted to look at the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this time from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired through the police’s warrantless use of a Global Positioning System (GPS) device to track his movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper that tracked a single journey, whereas the GPS monitoring in Maynard was sustained 24 hours a day, continuously, for one month.[15] The use of the GPS device over the course of one month did more than simply track the defendant’s “movements from one place to another”; it revealed the “totality and pattern” of his movement.[16] The court was willing to draw a distinction between “one path” and “the totality of one’s movement”: since someone’s “totality of movement” is much less exposed to public view and much more revealing of that person’s personal life, it is constitutional for the police to track an individual on “one path,” but not that individual’s “totality of movement.”

Thus, with time the courts appear to be recognizing that when it comes to modern surveillance technology, the sheer quantity and the revealing nature of data collected on the movements of public street users ought to raise concerns. The straightforward application of these holdings to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movement. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over one month. The limit would perhaps fall somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors utilized by AVs pick up more than locational information. As discussed above, AV sensing systems, composed of multiple sensors, capture both camera images and information about the speed, texture, and depth of objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS; they “see” the vehicle through their cameras, LiDAR, and radar devices, gaining a wealth of information. This means that even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by AV sensing systems is much more in-depth than what a beeper or cell-site location information (CSLI) can communicate. Adding to this, current developers propose AV networks that share data among many vehicles, so that data on “one path” can be combined with other data on the same vehicle’s movement, or multiple views of the same “one path” from different perspectives can be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. Thus, it is foreseeable that subpoenaing AV sensing data without a warrant would fall firmly within what the courts consider an unreasonable “search.”

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”).

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang & Xiaodong Zhang, Real-time Vehicle Detection and Tracking Using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”).

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (D.C. Cir. 2010).

[14] Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report the crime to the police and hold the criminal accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general sense of how to hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and others in the legal profession to imagine applying the law to a technology that is not yet fully developed. Nevertheless, Congress and other lawmaking bodies will need to consider how to regulate the use of the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon GO, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, some examples come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring, such as new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world would feel as though they had actually experienced the assault in real life. This should be raising red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior from happening. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identities and remain anonymous, so it could be difficult to determine who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules governing conduct in Horizon Worlds. The problem remains, however, how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse exists outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”


Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites the prospect of teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.