Data Privacy

What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe, the genetic testing company that rose to prominence in the 2010s by offering relatively inexpensive access to genetic testing? It is now heading toward disaster. This September, all but one member of its board of directors tendered their resignations.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, a 99.9% decline from its 2021 peak.[2] This collapse in valuation suggests the company may declare bankruptcy, which often leads to a sale of a company’s assets. Bankruptcy or an asset sale presents a host of complex privacy and regulatory issues, particularly concerning the sale of 23andMe’s most valuable asset: its vast collection of consumer DNA data.[3] This uncertainty underscores serious gaps in comprehensive privacy protections for genetic information, gaps that leave consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] With respect to genetic information, HIPAA regulates only “group health plan[s], health insurance issuer[s] that issue[] health insurance coverage, or issuer[s] of a medicare supplemental policy.”[6] 23andMe fits none of these categories and therefore operates outside the scope of HIPAA’s protections, leaving the genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers some consumer protection by prohibiting discrimination based on an individual’s genetic information in premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits depriving individuals of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap in which genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the privacy commitments 23andMe has made. For example, 23andMe promises not to use genetic information for personalized or targeted marketing or advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business not to share or sell their personal information.[11] However, this CPRA right is an opt-out right, not an opt-in right: consumers can stop a future sale of their information, but by default there is no initial regulatory limit on the sale of their personal information.[12] As a result, nothing stops 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states that the company “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email and in-app notification as methods by which it may notify users of changes to the Privacy Statement.[15] If it uses those methods, a court would quite possibly view that as “clear communication,” and users would have little legal recourse to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, suppose a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the danger of genetic information landing in the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement provides that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses the user never intended, so long as the change is communicated to the user.[18] This transfer clause highlights a key concern: it allows deeply personal genetic data to be passed to another company without additional consent, potentially exposing users to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users of any change to the Privacy Statement or to its use of genetic information, it does not specify whether that notice will be given in advance.[19] A new entity could therefore plan a change to the Privacy Statement, altering how it uses genetic information while leaving users in the dark until the change is communicated, by which point users’ information may already have been shared with third parties.

The potential 23andMe bankruptcy and asset sale reveals deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk having their sensitive genetic information sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks of sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024), https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, WALL ST. J. (Sept. 18, 2024), https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, a plan that looms larger given the current state of the company’s financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. §1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D. Tenenbaum & Kenneth W. Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024), https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 Fed. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user did not check a box to assent to the terms).

[15] Privacy Statement, supra note 13.

[16] See Kan. Stat. Ann. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take genetic information into account when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Ticketmaster, 817 Fed. App’x 393 (2019).

[19] Privacy Statement, supra note 13.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. While states like California and Texas are attempting to implement new safeguards for their residents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new amendments overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc., which allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

BIPA’s passage in 2008 made it one of the earliest consumer protection laws governing biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventive measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, why it is being collected, and how long they intend to store it.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as the company’s other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce its data privacy protections. Following its passage, waves of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was never misused.[10] Plaintiffs could recover for every violation under BIPA, and could do so without pleading an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Because litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and, if found liable, companies would owe at minimum $1,000 per violation. This lack of predictability often pushed corporate liability insurers to settle rather than risk enormous payouts.

The 2023 ruling in Cothron invited the legislature to address concerns about disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller and mid-size companies.[14] The new provisions of BIPA target the court’s ruling, providing:

(b) For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)

Though not left completely without redress, Illinois residents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.
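To see what that change means in dollar terms, consider some rough arithmetic. The Python sketch below uses BIPA’s $1,000 liquidated-damages floor for negligent violations; the class size and scan counts are hypothetical, and a real damages analysis would turn on many factors this toy comparison ignores.

```python
# Rough exposure arithmetic under BIPA's liquidated-damages floor
# ($1,000 per negligent violation). Class size and scan counts are
# hypothetical; actual damages analyses involve far more factors.

NEGLIGENT_DAMAGES = 1_000  # statutory minimum per negligent violation

def exposure(class_size: int, scans_per_person: int, per_violation: bool) -> int:
    """Estimate minimum statutory exposure for a biometric-scan class action."""
    violations = scans_per_person if per_violation else 1  # accrual rule
    return class_size * violations * NEGLIGENT_DAMAGES

# A 1,000-member class, each member fingerprint-scanned 500 times:
print(exposure(1_000, 500, per_violation=True))   # Cothron accrual: $500,000,000
print(exposure(1_000, 500, per_violation=False))  # amended BIPA:    $1,000,000
```

The same conduct yields exposure that differs by a factor of 500, which is why insurers found per-violation accrual so difficult to price.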

BIPA Reforms Mark a Trend Toward Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need them most. The stakes for mishandling biometric data are much higher than those for other collected data. Social security numbers and credit card numbers can be canceled and changed, with varying degrees of ease; faces and fingerprints cannot be replaced once compromised.[15] Ongoing technological developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our own family members through phishing schemes.[16] These schemes rely on biometric data, using our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovery on a per-person basis instead of a per-violation basis means they could be harmed again by a company they have already recovered against, this time with no redress.

Corporations, however, have been calling for reforms for years, and believe these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers also encouraged companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may recover only once, they can still recover actual damages if a company’s actions caused more harm than a bare violation of BIPA’s provisions. Awards on a per-person basis can still produce hefty settlements or verdicts that hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and lower litigation costs overall.

Without further guidance from the legislature, how the new provisions apply will be left to state and federal courts to interpret. The legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can recover from an entity only once, are past litigants barred from participating in future actions over similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically handed courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Bus. & Com. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, § 15(a).

[6] Id. § 15(b).

[7] Id. § 15(c)-(d).

[8] Id. § 15(e).

[9] See generally In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F. Supp. 3d 1088 (N.D. Ill. 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, § 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM), https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM), https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator (Aug. 2, 2024, 3:13 PM), https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham.

[15] See BIPA, supra note 3, § 5(c).

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


AR/VR/XR: Breaking the Wall of Legal Issues Between the Real World and the Virtual World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing many aspects of our lives. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. But how significantly can this type of technology impact human beings? What industries will it affect? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated, immersive environment that can provide an experience either similar to or completely different from the real world.[3] Mixed Reality/Extended Reality (“XR”) glasses, by contrast, are relatively compact and sleek and weigh much less than VR headsets.[4] XR’s most distinguishing quality relative to VR is that individuals can still see the world around them, because XR projects a translucent screen on top of the real world. The differences among these three vision technologies may soon be eliminated by their combination into one device.

Typically, vision technology helps people mentally process 2-D information in a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a sense of disconnect between their eyes and bodies after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments acknowledge adverse effects in AR/VR trials caused by the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects how people behave in social situations, leaving users feeling less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion and is projected to reach $88 billion by 2026.[8] According to industry specialists and analysts, outside of gaming a significant portion of vision technology revenue will come from e-commerce and retail (fashion and beauty), manufacturing, education, healthcare, real estate, and e-sports, which will in turn affect entertainment, cost of living, and innovation.[9] To manage this tremendous opportunity, it is crucial to understand the potential legal risks and develop a comprehensive legal strategy to address the coming challenges.

To expand a business model in this space, it is important to maximize protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also implicates contractual concerns, service remedies, and liability for infringement of third-party IP. For example, in patent prosecution it is difficult to argue that the hardware executing an invention (characters or data information) is a particular machine, or that the designated steps performed by the hardware are significant under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against overextending abstraction analysis: “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas…[T]read carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns include data privacy, harassment, virtual trespass, and even violent attacks stemming from the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices remain ambiguous. It is also unclear whether courts will accept a defense of error in judgment attributable to the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have questioned whether AR/VR devices negatively impact the mental and physical health of younger users, and worry that these users may experience a decline in social communication skills and feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although some contend that the Children’s Online Privacy Protection Act (COPPA) prohibits the collection of personally identifiable information when an operator believes a user to be under the age of thirteen.[12]

According to research trends, combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale, bring people together in shared virtual environments, and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for this upcoming world.

Notes

[1] Eric Ravenscraft, What Is the Metaverse, Exactly?, Wired (June 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sept. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci. Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkt. and Mkt., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining that the evaluation of whether a claim recites significantly more than a judicial exception depends on whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 C.F.R. pt. 312.


The Double-Helix Dilemma: Navigating Privacy Pitfalls in Direct-to-Consumer Genetic Testing

Ethan Wold, MJLST Staffer

Introduction

On October 22, 2023, direct-to-consumer genetic testing (DTC-GT) company 23andMe sent emails to a number of its customers informing them of a data breach involving the company’s “DNA Relatives” feature, which allows customers to compare ancestry information with other users worldwide.[1] While 23andMe and similar DTC-GT companies offer a number of benefits to consumers, such as testing for health predispositions and carrier status for certain genes, this latest data breach is a reminder that before opting into these services, one should be aware of the risks they present.

Background

DTC-GT companies such as 23andMe and Ancestry.com have proliferated in recent years. It is estimated that over 100 million people have used some form of direct-to-consumer genetic testing.[2] Using biospecimens submitted by consumers, these companies sequence and analyze an individual’s genetic information to provide a range of services pertaining to health and ancestry.[3] The October 22 data breach specifically concerned 23andMe’s “DNA Relatives” feature.[4] The DNA Relatives feature can identify relatives on any branch of one’s family tree by taking advantage of the autosomal chromosomes, the 22 pairs of chromosomes passed down from ancestors on both sides of the family, along with one’s X chromosome(s).[5] Relatives are identified by comparing a customer’s submitted DNA with the DNA of other 23andMe members participating in the DNA Relatives feature.[6] When two people are found to share an identical DNA segment, it is likely they share a recent common ancestor.[7] The DNA Relatives feature even uses the length and number of these identical segments to predict the relationship between genetic relatives.[8] Given the sensitive nature of shared genetic information, practices like the DNA Relatives feature raise obvious privacy concerns. Despite this, the legislation and regulation surrounding DTC-GT are somewhat limited.
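To make the matching idea concrete, here is a minimal Python sketch of how a service might map the amount of DNA two users share to a predicted relationship. This is not 23andMe’s actual algorithm; the centimorgan thresholds below are rough, illustrative figures, and real services also weigh segment count, segment length, and other factors.

```python
# Illustrative sketch only -- not 23andMe's actual matching algorithm.
# Relative-matching services compare users' genomes for long identical
# segments; the total shared length (measured in centimorgans, cM)
# roughly tracks how close the relationship is. Thresholds below are
# hypothetical approximations for illustration.

def predict_relationship(shared_cm: float) -> str:
    """Map total shared DNA (cM) to a coarse relationship estimate."""
    if shared_cm > 3300:
        return "parent/child"
    if shared_cm > 2200:
        return "full sibling"
    if shared_cm > 1300:
        return "grandparent, aunt/uncle, or half sibling"
    if shared_cm > 500:
        return "first cousin"
    if shared_cm > 90:
        return "second cousin"
    return "distant relative or unrelated"

print(predict_relationship(3500))  # parent/child
print(predict_relationship(850))   # first cousin
```

The privacy implication follows directly from the design: to find your relatives, the service must compare your genome against everyone else’s, so a breach of the feature exposes not just your own data but the web of inferred family connections around it.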

Legislation

The Health Insurance Portability and Accountability Act (HIPAA) provides the baseline privacy and data security rules for the healthcare industry.[9] HIPAA’s Privacy Rule regulates the use and disclosure of a person’s “protected health information” by a “covered entity.”[10] Under the Act, the type of genetic information collected by 23andMe and other DTC-GT companies does constitute “protected health information.”[11] However, because HIPAA defines a “covered entity” as a health plan, healthcare clearinghouse, or healthcare provider, DTC-GT companies are not covered entities and therefore fall outside the umbrella of HIPAA’s Privacy Rule.[12]

Thus, the primary source of regulation for DTC-GT companies appears to be the Genetic Information Nondiscrimination Act (GINA). GINA was enacted in 2008 to protect the public from genetic discrimination and to alleviate concerns about such discrimination, thereby encouraging individuals to take advantage of genetic testing, technologies, research, and new therapies.[13] GINA defines genetic information as information from genetic tests of an individual or family members, including information from genetic services or genetic research.[14] DTC-GT companies therefore fall within GINA’s scope. However, GINA applies only to the employment and health insurance industries, neglecting many other arenas where privacy concerns may arise.[15] This is especially relevant for 23andMe customers, as signing up for the service constitutes consent for the company to use and share genetic information with its associated third-party providers.[16] As a case in point, in 2018 the pharmaceutical giant GlaxoSmithKline purchased a $300 million stake in 23andMe to gain access to the company’s trove of genetic information for use in its drug development trials.[17]

Executive Regulation

In addition to the legislation above, three federal administrative agencies primarily regulate the DTC-GT industry: the Food and Drug Administration (FDA), the Centers for Medicare & Medicaid Services (CMS), and the Federal Trade Commission (FTC). The FDA has jurisdiction over DTC-GT companies because the genetic tests they use are classified as “medical devices,”[18] and in 2013 the agency exercised this authority over 23andMe by sending a letter that resulted in the suspension of one of the company’s health-related genetic tests.[19] However, the FDA’s jurisdiction extends only to diagnostic tests, so it does not regulate DTC-GT genealogy services such as 23andMe’s DNA Relatives feature.[20] Nor does the FDA have jurisdiction over other aspects of DTC-GT companies’ activities or data practices.[21] CMS can regulate DTC-GT companies through enforcement of the Clinical Laboratory Improvement Amendments (CLIA), which require genetic testing laboratories to ensure the accuracy, precision, and analytical validity of their tests.[22] But, like the FDA, CMS has jurisdiction only over tests that diagnose a disease or assess health.[23]

Lastly, the FTC has broad authority to regulate unfair or deceptive business practices under the Federal Trade Commission Act (FTCA) and has wielded this authority against DTC-GT companies in the past. For example, in 2014 the agency brought an action against two DTC-GT companies that were using genetic tests to match consumers to nutritional supplements and skincare products.[24] The FTC alleged that the companies’ data security practices were unfair and deceptive because the companies failed to implement reasonable policies and procedures to protect consumers’ personal information and created unnecessary risks to the personal information of nearly 30,000 consumers.[25] The action resulted in an agreement with the FTC whereby the companies agreed to establish and maintain comprehensive data security programs and submit to yearly security audits by independent auditors.[26]

Potential Harms

As the above passages illustrate, the federal government recognizes, and has at least attempted to mitigate, the privacy concerns associated with DTC-GT. A number of states have also passed their own laws limiting DTC-GT in certain respects.[27] Nevertheless, given the potential magnitude and severity of harm associated with DTC-GT, one must ask whether it is enough. Data breaches involving health-related data are growing in frequency and now account for 40% of all reported data breaches.[28] Such breaches give outsiders unauthorized access to consumer-submitted data and can violate an individual’s genetic privacy. Though GINA aims to prevent it, genetic discrimination, in the form of increased health insurance premiums or denial of coverage based on genetic predispositions, remains a leading concern. Moreover, by obtaining genetic information from DTC-GT databases, a bad actor can recover a consumer’s surname and combine it with other metadata, such as age and state, to identify the specific consumer.[29] This can in turn enable identity theft: opening accounts, taking out loans, or making purchases in the victim’s name, damaging their financial well-being and credit score. Dealing with the aftermath of a genetic data breach can also be expensive, as victims may incur legal fees, credit monitoring costs, and other financial burdens in an attempt to mitigate the damage.

Conclusion

The genetic information already submitted to DTC-GT companies is significant in both volume and consequence. As technology develops and research presses forward, the volume and utility of this information will only grow. It is therefore crucially important to be aware of the risks associated with DTC-GT services.

This discussion is not intended to discourage individuals from participating in DTC-GT. These companies and their services provide a host of benefits, such as giving consumers access to genetic testing without the healthcare system acting as gatekeeper, with more autonomy and often at a lower price.[30] The information provided can also empower consumers to mitigate the risks of certain diseases, plan their families with better information, or gain a deeper understanding of their heritage.[31] DTC-GT has revolutionized the way individuals access and understand their genetic information, but this accessibility and convenience come with disadvantages that must be weighed carefully.

Notes

[1] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[2] https://www.ama-assn.org/delivering-care/patient-support-advocacy/protect-sensitive-individual-data-risk-dtc-genetic-tests#:~:text=Use%20of%20direct%2Dto%2Dconsumer,November%202021%20AMA%20Special%20Meeting

[3] https://go-gale-com.ezp3.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[4] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[5] https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

[6] Id.

[7] Id.

[8] Id.

[9] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[10] https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

[11] Id.

[12] Id; https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[13] https://www.eeoc.gov/statutes/genetic-information-nondiscrimination-act-2008

[14] Id.

[15] https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3035561&blobtype=pdf

[16] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[17] https://news.yahoo.com/news/major-drug-company-now-access-194758309.html

[18] https://uscode.house.gov/view.xhtml?req=(title:21%20section:321%20edition:prelim)

[19] https://core.ac.uk/download/pdf/33135586.pdf

[20] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[21] Id.

[22] https://www.law.cornell.edu/cfr/text/42/493.1253

[23] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[24] https://www.ftc.gov/system/files/documents/cases/140512genelinkcmpt.pdf

[25] Id.

[26] Id.

[27] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[28] Id.

[29] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[30] Id.

[31] Id.


The Policy Future for Telehealth After the Pandemic

Jack Atterberry, MJLST Staffer

The Pandemic Accelerated Telehealth Utilization

Before the Covid-19 pandemic, telehealth usage in the United States healthcare system was negligible, rounding to 0% of total outpatient care visits.[1] In the two years after the pandemic began, telehealth usage soared to over 10% of outpatient visits and became widely used across all payer categories, including Medicare and Medicaid.[2] The social distancing realities of the pandemic years, coupled with federal policy measures, enabled this radical transition toward telehealth.

In response to the onset of Covid-19, the federal government relaxed and modified many telehealth regulations, expanding permissible access to telehealth services. After a public health emergency was declared in early 2020, the Centers for Medicare & Medicaid Services (CMS) and the Department of Health and Human Services (HHS) modified preexisting telehealth regulations to expand the permissible use of those services. Specifically, CMS temporarily expanded Medicare coverage to include telehealth services without requiring in-person visits, relaxed practice restrictions by expanding the types of providers that could deliver telehealth, and increased reimbursement rates for telehealth services to bring them closer to in-person rates.[3] In addition, HHS eased HIPAA requirements around the use of popular communication platforms such as Zoom, Skype, and FaceTime, provided they are used in good faith.[4] Collectively, these changes drove a significant rise in telehealth services and expanded access to care for many people who would otherwise not receive healthcare. Unfortunately, many of these telehealth provisions are set to expire in 2024, leaving open the question of whether the benefits of telehealth expansion are here to stay once the public emergency measures end.[5]

Issues with Telehealth Care Delivery Between States

A major legal impediment to telehealth expansion in the US is the complex interplay of state and federal laws and regulations governing telehealth care delivery. At the state level, two key differences have historically held back telehealth’s expansion. First, healthcare providers are most often licensed at the state level, which creates a barrier for providers who want to offer telehealth services across state lines. While many states implemented temporary waivers or joined interstate medical licensure compacts to address this issue during the pandemic, many others have not, and large inconsistencies remain. Second, states differ significantly in their reimbursement policies, with payer types reimbursing differently across regions; this has left providers unsure whether delivering care in certain states will be adequately reimbursed. Although the federal health emergency eased interstate telehealth restrictions during the pandemic, these challenges will likely return after the temporary measures are lifted at the end of 2024.

The pandemic-era easing of telehealth restrictions taught us that interstate telehealth improves health outcomes, increases patient satisfaction, and closes gaps in care delivery. In particular, rural communities and other underserved areas with relatively few healthcare providers benefited greatly from the ability to receive care from out-of-state providers. For example, patients in states like Montana, North Dakota, and South Dakota benefit immensely from being able to talk with an out-of-state mental health provider because of severe shortages of psychiatrists, psychologists, and other mental health practitioners in those states.[6] In addition, a 2021 study by the Bipartisan Policy Center highlighted that patients in states that joined interstate licensure compacts experienced a noticeable improvement in their care experience, while healthcare workforces saw a decreased burden on their chronically stressed providers.[7] These positive outcomes of eased interstate healthcare regulation should inform telehealth policy moving forward.

Policy Bottlenecks to Telehealth Care Access Expansion

The place of telehealth in American healthcare is surprisingly uncertain as the US emerges from the pandemic years. As the public health emergency measures that removed various legal and regulatory barriers to telehealth expire next year, many Americans could lose access to healthcare via telehealth services. To ensure that telehealth remains a part of American healthcare, federal and state policymakers will need to act to bring long-term certainty to the telehealth regulatory framework. In particular, advocacy groups such as the American Telemedicine Association recommend that policymakers focus on key changes: removing licensing barriers to interstate telehealth, modernizing reimbursement structures to align with value-based payment principles, and permanently adopting pandemic-era telehealth access for Medicare, Federally Qualified Health Centers, and Rural Health Clinics.[8] Another valuable federal regulatory change would be to continue allowing the prescription of controlled substances without an in-person visit, which would entail modifying the Ryan Haight Act’s requirement of an in-person medical exam before controlled substances may be prescribed.[9] Like any healthcare reform in the US, cementing these telehealth policy changes into law will be a major uphill battle. Nonetheless, expanding access to telehealth could be a bipartisan opportunity for lawmakers, as it would expand access to care and help drive the transition toward value-based care, leading to better health outcomes for patients.

Notes

[1] https://www.healthsystemtracker.org/brief/outpatient-telehealth-use-soared-early-in-the-covid-19-pandemic-but-has-since-receded/

[2] https://www.cms.gov/newsroom/press-releases/new-hhs-study-shows-63-fold-increase-medicare-telehealth-utilization-during-pandemic#:~:text=Taken%20as%20a%20whole%2C%20the,Island%2C%20New%20Hampshire%20and%20Connecticut.

[3] https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency

[4] Id.

[5] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[6] https://thinkbiggerdogood.org/enhancing-the-capacity-of-the-mental-health-and-addiction-workforce-a-framework/?_cldee=anVsaWFkaGFycmlzQGdtYWlsLmNvbQ%3d%3d&recipientid=contact-ddf72678e25aeb11988700155d3b3c69-e949ac3beff94a799393fb4e9bbe3757&utm_source=ClickDimensions&utm_medium=email&utm_campaign=Health%20%7C%20Mental%20Health%20Access%20%7C%2010.19.21&esid=e4588cef-7520-ec11-b6e6-002248246368

[7] https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/11/BPC-Health-Licensure-Brief_WEB.pdf

[8] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[9] https://www.aafp.org/pubs/fpm/issues/2021/0500/p9.html


Perhaps Big Tech Regulation Belongs on Congress’s For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok’s parent company ByteDance sold its share of the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are reportedly concerned about the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference on the platform and recommending content to U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

As for what’s to come for TikTok in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023 with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] To cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok successfully quashed that effort and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the Act’s main purpose is to limit anticompetitive conduct by large technology companies, it includes several provisions protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services, such as online search engines, online social networking services, video-sharing platform services, number-independent interpersonal communications services, operating systems, web browsers, and online advertising services, that serve as gateways for businesses to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertising services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data, absent explicit consent.[13]

The penalties associated with violations of the Act give it serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement of specified articles at some point in the preceding eight years.[14] For any company, not only gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles of the Act. Finally, to compel any company to comply with specific Commission decisions and other articles of the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]
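To put those percentages in perspective, here is a minimal Python sketch of the fine ceilings just described. The turnover figure is hypothetical, and the sketch ignores the many factors the Commission actually weighs when setting a fine within these caps.

```python
# Rough illustration of the DMA's fine ceilings. The turnover figure is
# hypothetical; real fines are set within these caps based on many factors.

def max_noncompliance_fine(worldwide_turnover: float, repeat_within_8_years: bool) -> float:
    """Ceiling for a gatekeeper's noncompliance fine: 10%, or 20% for repeat infringements."""
    rate = 0.20 if repeat_within_8_years else 0.10
    return worldwide_turnover * rate

def max_periodic_penalty(worldwide_turnover: float, days: int) -> float:
    """Ceiling for periodic penalty payments: up to 5% of average daily turnover, per day."""
    avg_daily_turnover = worldwide_turnover / 365
    return avg_daily_turnover * 0.05 * days

turnover = 100_000_000_000.0  # hypothetical: EUR 100B in the preceding year
print(max_noncompliance_fine(turnover, repeat_within_8_years=False))  # EUR 10B ceiling
print(max_periodic_penalty(turnover, days=30))  # roughly EUR 411M over 30 days
```

Even at the hypothetical turnover above, a single first-instance fine can reach ten billion euros, which is what gives the regulation its deterrent force.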

If the U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about the spread of misinformation on the platform, and truly believe, as Representative Gus Bilirakis claims to, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” that allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different from other social media giants, and has even sought to put stronger safeguards in place than its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on that instead.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id. art. 3, at 28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Kochman, supra note 1.

[17] Shepardson & Ayyub, supra note 3.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Kochman, supra note 1.


The Future of Neurotechnology: Brain Healing or Brain Hacking?

Gordon Unzen, MJLST Staffer

Brain control and mindreading are no longer ideas confined to the realm of science fiction—such possibilities are now the focus of science in the field of neurotechnology. At the forefront of the neurotechnology revolution is Neuralink, a medical device company owned by Elon Musk. Musk envisions that his device will allow communication with a computer via the brain, restore mobility to the paralyzed and sight to the blind, create mechanisms by which memories can be saved and replayed, give rise to abilities like telepathy, and even transform humans into cyborgs to combat sentient artificial intelligence (AI) machines.[1]

Both theoretical and current applications of brain-interfacing devices, however, raise concerns about infringements upon privacy and freedom of thought, with the technology providing intimate information ripe for exploitation by governments and private companies.[2] Now is the time to consider how to address the ethical issues raised by neurotechnology so that people may responsibly enjoy its benefits.

What is Neurotechnology?

Neurotechnology describes the use of technology to understand the brain and its processes, with the goal of controlling, repairing, or improving brain functioning.[3] Neurotechnology research uses techniques that record brain activity, such as functional magnetic resonance imaging (fMRI), and techniques that stimulate the brain, such as transcranial electrical stimulation (tES).[4] Both research practices and neurotechnological devices can be invasive, with electrodes surgically implanted in the brain, or non-invasive, requiring no surgery.[5] Neurotechnology research is still in its infancy, but development will likely continue accelerating as increasingly advanced AI helps make sense of the data.[6]

Work in neurotechnology has already produced applications in fields from medicine to policing. Bioresorbable electronic medication speeds up nerve regeneration; deep brain stimulators function as brain pacemakers targeting symptoms of diseases like Parkinson’s; and neurofeedback visualizes brain activity for the real-time treatment of mental illnesses like depression.[7] Recently, a neurotechnological device that stimulates the spinal cord allowed a stroke patient to regain control of her arm.[8] Electroencephalogram (EEG) headsets are used by gamers as video game controllers and by transportation services to track when a truck driver is losing focus.[9] In China, the government uses caps to scan employees’ brainwaves for signs of anxiety, rage, or fatigue.[10] “Brain-fingerprinting” technology, which analyzes whether a subject recognizes a given stimulus, has been used by India’s police since 2003 to ‘interrogate’ a suspect’s brain, although there are questions about the scientific validity of the practice.[11]

Current research enterprises in neurotechnology aim to push the possibilities much further. Mark Zuckerberg’s Meta financed invasive neurotechnology research in which an algorithm decoded subjects’ answers to simple questions from brain activity with 61% accuracy.[12] The long-term goal is to allow everyone to control their digital devices through thought alone.[13] Musk similarly aims to begin human trials of Neuralink devices designed to help paralyzed individuals communicate without typing, and he hopes this work will eventually allow Neuralink to fully restore their mobility.[14] However, Musk has hit a roadblock, failing to acquire FDA approval for human testing despite claiming that Neuralink devices are safe enough that he would consider using them on his own children.[15] Others expect that neurofeedback will eventually see mainstream deployment through devices akin to fitness trackers, allowing people to constantly monitor their brain health metrics.[16]
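
As a rough illustration of what “decoding” means computationally, here is a toy sketch; it assumes nothing about Meta’s or Neuralink’s actual systems, and the data, feature count, and signal strength are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "brain activity": 200 trials x 64 features (e.g., band power
# per electrode), with a faint signal separating "yes" from "no" answers.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200)  # 0 = "no", 1 = "yes"
X[y == 1, :8] += 0.4              # inject a weak class-dependent signal

# "Decoding" here is ordinary supervised classification: learn a map from
# neural features to the intended answer, then test on held-out trials.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out decoding accuracy: {clf.score(X_test, y_test):.0%}")
```

Real decoding pipelines differ enormously in scale and signal quality, but the core idea, supervised learning from recorded brain features to intended output, is the same.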

Ethical Concerns and Neurorights

Despite the possible medical and societal benefits of neurotechnology, it would be dangerous to ignore the ethical red flags raised by devices that can observe and impose on brain functioning. In a world of increasing surveillance, the last bastion of privacy and freedom exists in the brain. That sanctuary is lost when even the brain is subject to data collection. Neurotechnology may expose people to dystopian thought policing and hijacking but, more subtly, could also lead to widespread adverse psychological consequences as people live in constant fear of their thoughts being made public.[17]

Particularly worrisome is how current government and business practices inform the likely near-future use of data collected by neurotechnology. In law enforcement contexts such as interrogations, neurotechnology could allow the government to cause people to self-incriminate in violation of the Fifth Amendment. Private companies that collect brain data may be required to turn it over to governments, analogous to the use of Fitbit data as evidence in court.[18] If the data do not go to the government, companies may instead sell them to advertisers.[19] Even positive implementations can be taken too far. EEG headsets that allow companies to track the brain activity of transportation employees may be socially desirable, but the widespread monitoring of all employees for productivity is a plausible and sinister next step.

In light of these concerns, ethicist and lawyer Nita Farahany argues for updating human rights law to protect cognitive privacy and liberty.[20] Farahany describes a right of self-determination regarding neurotechnology that secures freedom from interference, access to the technology if desired, and the ability to change one’s own brain by choice.[21] This libertarian perspective acknowledges the benefits of neurotechnology, for which many may be willing to sacrifice privacy, while also ensuring that people have an opportunity to say no to its imposition. Others take a more paternalistic approach, questioning whether further regulation is needed to limit possible neurotechnology applications. Sigal Samuel notes that cognitive-enhancing tools may create competitive pressure that forces people to either use the technology or get left behind.[22] Decisions to engage with neurotechnology thus would not be made with the freedom Farahany imagines.

Conclusion

Neurotechnology holds great promise for augmenting the human experience. The technology will likely play an increasingly significant role in treating physical disabilities and mental illnesses. In the near future, we will see the continued integration of thought as a method to control technology. We may also gain access to devices offering new cognitive abilities from better memory to telepathy. However, using this technology will require people to give up extremely private information about their brain functions to governments and companies. Regulation, whether it takes the form of a revamped notion of human rights or paternalistic lawmaking limiting the technology, is required to navigate the ethical issues raised by neurotechnology. Now is the time to act to protect privacy and liberty.

[1] Rachel Levy & Marisa Taylor, U.S. Regulators Rejected Elon Musk’s Bid to Test Brain Chips in Humans, Citing Safety Risks, Reuters (Mar. 2, 2023), https://www.reuters.com/investigates/special-report/neuralink-musk-fda/.

[2] Sigal Samuel, Your Brain May Not be Private Much Longer, Vox (Mar. 17, 2023), https://www.vox.com/future-perfect/2023/3/17/23638325/neurotechnology-ethics-neurofeedback-brain-stimulation-nita-farahany.

[3] Neurotechnology, How to Reveal the Secrets of the Human Brain?, Iberdrola, https://www.iberdrola.com/innovation/neurotechnology#:~:text=Neurotechnology%20uses%20different%20techniques%20to,implantation%20of%20electrodes%20through%20surgery (last accessed Mar. 19, 2023).

[4] Id.

[5] Id.

[6] Margaretta Colangelo, How AI Is Advancing NeuroTech, Forbes (Feb. 12, 2020), https://www.forbes.com/sites/cognitiveworld/2020/02/12/how-ai-is-advancing-neurotech/?sh=277472010ab5.

[7] Advances in Neurotechnology Poised to Impact Life and Health Insurance, RGA (July 19, 2022), https://www.rgare.com/knowledge-center/media/research/advances-in-neurotechnology-poised-to-impact-life-and-health-insurance.

[8] Stroke Patient Regains Arm Control After Nine Years Using New Neurotechnology, WioNews (Feb. 22, 2023), https://www.wionews.com/trending/stroke-patients-can-regain-arm-control-using-new-neurotechnology-says-research-564285.

[9] Camilla Cavendish, Humanity is Sleepwalking into a Neurotech Disaster, Financial Times (Mar. 3, 2023), https://www.ft.com/content/e30d7c75-90a3-4980-ac71-61520504753b.

[10] Samuel, supra note 2.

[11] Id.

[12] Sigal Samuel, Facebook is Building Tech to Read your Mind. The Ethical Implications are Staggering, Vox (Aug. 5, 2019), https://www.vox.com/future-perfect/2019/8/5/20750259/facebook-ai-mind-reading-brain-computer-interface.

[13] Id.

[14] Levy & Taylor, supra note 1.

[15] Id.

[16] Manuela López Restrepo, Neurotech Could Connect Our Brains to Computers. What Could Go Wrong, Right?, NPR (Mar. 14, 2023), https://www.npr.org/2023/03/14/1163494707/neurotechnology-privacy-data-tracking-nita-farahany-battle-for-brain-book.

[17] Vanessa Bates Ramirez, Could Brain-Computer Interfaces Lead to ‘Mind Control for Good’?, Singularity Hub (Mar. 16, 2023), https://singularityhub.com/2023/03/16/mind-control-for-good-the-future-of-brain-computer-interfaces/.

[18] Restrepo, supra note 16.

[19] Samuel, supra note 12.

[20] Samuel, supra note 2.

[21] Id.

[22] Id.


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, emergency rooms overflowed while doctors were in short supply.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and for as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth services has declined slightly in recent years, it appears to be here to stay. Nowhere has telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still accounted for over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three mental health outpatient visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, the shift in the way healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth for mental health services is privacy, for several reasons. The primary concern stems from the fact that telehealth takes place over the phone or on personal computers, and when personal devices are used, it is nearly impossible to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While some concerns remain, these secure servers have mitigated much of the worry.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth services for mental health is a little more difficult to address. This concern comes from the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] With the increased use of telehealth for mental health services, there has also been an increase in the use of these mental health apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, credit score, etc.[16]

In fact, according to a 2022 study, a popular therapy app, BetterHelp, was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

One example of information that does get shared is the intake questionnaire.[19] An intake questionnaire must be completed on BetterHelp, and on other therapy apps, before a customer can be matched with a provider.[20] BetterHelp was found to have shared users’ answers to these intake questionnaires with an analytics company, along with each user’s approximate location and device.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long their therapy sessions are, how long they spend sending messages, when they log in, when they message or speak to their therapist, their approximate location, how often they open the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were among the recipients of other information shared by BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA has issued guidance saying it will exercise enforcement discretion when dealing with mental health apps.[27] In practice, this means that if the privacy risk appears low, the FDA will not pursue enforcement against these companies.[28]

Ultimately, if mental health telehealth services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps offer telehealth services much like any healthcare provider covered by HIPAA. Knowing that their personal data is shared so freely, many users have lost confidence in these apps. In the long run, regulatory oversight would increase the pressure on these companies to show that their services can be trusted, potentially increasing their success by rebuilding trust with the public.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth Has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, Do Mental Wellness Apps Need More Oversight?, Fierce Healthcare (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information Is for Sale, Mad in America (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses face enhanced regulation of data privacy. A number of new data security laws and regulations took effect this year, increasing the legal requirements imposed on company-held data in order to protect those companies’ customers. Two stand out: the FTC Safeguards Rule and the EU’s NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. The FTC requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by this rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires covered financial institutions to “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”
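
As a hedged illustration of what just one item on that list, encrypting sensitive information, can look like in code, here is a minimal sketch using Python’s cryptography library; the field name and value are invented:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# For illustration only: a production system would fetch this key from a
# key-management service, never generate or hard-code it in application code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a sensitive customer field before it is written to storage.
ssn_plaintext = b"123-45-6789"  # invented example value
ssn_ciphertext = cipher.encrypt(ssn_plaintext)

# Only a holder of the key can recover the original value.
assert cipher.decrypt(ssn_ciphertext) == ssn_plaintext
print("Ciphertext prefix:", ssn_ciphertext[:16])
```

Most of the rule’s other requirements, such as the written risk assessment and personnel training, are organizational and have no code-level shortcut.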

One specific question that arises is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the rule does not run counter to the FTC’s mission of protecting consumers, but its economic costs and effects are debatable. One concern is that the rule may impose substantial costs, especially on small businesses, for whom the new obligations may prove unbearable given their limited capital compared to large companies. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the requirements under the rule, or have sophisticated programs that are easily adaptable to the new obligations, the FTC still underestimates the burdens the rule imposes.[4] Specifically, labor shortages have hampered financial institutions’ efforts to implement information security systems, and supply chain issues have delayed the acquisition of equipment for updating information systems. Importantly, as Commissioner Wilson notes, most of these factors are outside the control of the financial institutions. Imposing a heightened standard would thus cause unfairness, especially to small financial institutions that have even more trouble obtaining the necessary equipment during times of supply chain disruption and labor shortage.

Recognizing these difficulties, the FTC did offer some leniency in implementation. Specifically, it extended the compliance deadline by six months, primarily because supply chain issues may delay security upgrades and qualified personnel to implement information security programs remain scarce. This extension benefits the rule’s rollout because it gives covered financial institutions time to adjust and comply.

Another concern is that the rule’s mandates will not significantly reduce the data security risks customers face. Whether they will remains uncertain, as the Safeguards Rule only recently came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rule-making process the FTC sought comments on the proposed rule and extended the public comment deadline by 60 days.[5] This suggests the FTC carefully considered how to most effectively reduce data security risks by giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires Member States to be appropriately equipped with incident response and information systems, establishes a Cooperation Group to facilitate strategic cooperation and the exchange of information among Member States, and aims to ensure a culture of security across sectors that rely heavily on critical infrastructure, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers. In its elevated standard of cybersecurity measures, the NIS2 Directive largely echoes the FTC Safeguards Rule.

However, the NIS2 Directive differs by imposing duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive designates that ENISA assist Member States and the Cooperation Group established under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Tasking the agency itself with facilitating implementation may add to the Directive’s likelihood of success. Although the outcome is uncertain, primarily because of the Directive’s broad language, the burdens on financial institutions will at least be lessened to some extent. What further distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This timeline offers more flexibility than the FTC Safeguards Rule’s extension: as the Directive passes through national legislative processes, financial institutions will have more time to prepare for and respond to the changes.

In summary, data privacy laws are tightening globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate similar industries. Regardless of what happens in the EU, however, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the rule’s ultimate effect is uncertain, the six-month extension will at least offer a degree of flexibility.

Notes

[1] https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov. 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.



Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of a bleak future in a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers; (2) being sensitive to tweaks to the input phrasing or to the same prompt attempted multiple times; (3) being excessively verbose and overusing certain phrases; (4) being unable to ask clarifying questions when the user provides an ambiguous query; and (5) responding to harmful instructions or exhibiting biased behavior.
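
To make “autoregressive” concrete, here is a toy sketch of next-word generation; the vocabulary and probabilities below are invented for illustration, while a real GPT model computes them with a neural network trained on vast text corpora:

```python
import random

# A toy conditional distribution P(next word | current word), invented for
# illustration. A real GPT model derives these probabilities from billions
# of learned parameters, but the generation loop works the same way.
NEXT_WORD_PROBS = {
    "the":    {"robot": 0.5, "human": 0.5},
    "robot":  {"writes": 0.6, "thinks": 0.4},
    "human":  {"writes": 0.3, "thinks": 0.7},
    "writes": {"code": 1.0},
    "thinks": {"deeply": 1.0},
}

def sample_next(word: str) -> str:
    """Sample one next word from the toy conditional distribution."""
    candidates = NEXT_WORD_PROBS.get(word)
    if not candidates:
        return "<end>"
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt: str, max_words: int = 5) -> str:
    """Autoregressive generation: each word is conditioned on what came before."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = sample_next(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g., "the robot writes code"
```

Conceptually, the loop that strings ChatGPT’s fluent paragraphs together is this same one, repeated over a far richer probability model.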

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that this difference in quality makes ChatGPT a tipping point for AI: it can be used to write weight-loss plans and children’s books, and even to offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it produced only 347 words, nowhere near the 1,000-word minimum I had set for it. What is evident across these cases, however, is a level of quality that sounds remarkably human.
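
For readers who want to experiment beyond the web interface, a minimal sketch using OpenAI’s Python client as it existed in early 2023 follows; the model name, placeholder key, and prompts are illustrative assumptions, not details from this post:

```python
# pip install openai  (the 0.x client current in early 2023)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# The chat endpoint takes the conversation as a list of role-tagged messages,
# mirroring the dialogue format of the ChatGPT interface.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a legal technology blogger."},
        {"role": "user", "content": "Draft 300 words on ChatGPT and the law."},
    ],
)

print(response["choices"][0]["message"]["content"])
```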

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student using ChatGPT complete a four-hour project in less than an hour by creating computer code for a startup prototype using code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. While ChatGPT would not graduate at the top of its class (it would in fact be placed on academic probation), it would still, notably, earn a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and playing this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used for tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was set to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities for productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can find inspiration by conversing with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. Moreover, while ChatGPT has filters that prevent it from using offensive language, these filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI such as ChatGPT for these tasks is that, while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matter.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans become vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to produce human sentience. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, discussions about AI-generated art today suggest that AI may achieve what was believed impossible sooner rather than later. Seen this way, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.