
What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe—the genetic testing company whose relatively inexpensive tests made it a household name in the 2010s? It’s now heading toward disaster. This September, all but one member of its board of directors tendered their resignations.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, a 99.9% decline from its 2021 peak.[2] A decline of this magnitude suggests the company may declare bankruptcy, which often leads to a sale of a company’s assets. Bankruptcy or a sale of assets presents a host of complex privacy and regulatory issues, particularly concerning the sale of 23andMe’s most valuable asset—its vast collection of consumer DNA data.[3] This uncertain situation underscores the weakness of existing privacy protections for genetic information, which leave consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] With respect to genetic information, HIPAA regulates only “group health plan[s], health insurance issuer[s] that issue[] health insurance coverage, or issuer[s] of a medicare supplemental policy.”[6] 23andMe fits none of these categories and therefore operates outside the scope of HIPAA protections, leaving the genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers consumer protections by prohibiting discrimination based on an individual’s genetic information with respect to health insurance premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits any deprivation of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap where genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the privacy commitments 23andMe has made. For example, 23andMe promises not to use the genetic information it receives for personalized or targeted marketing or advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business not to share or sell their personal information.[11] However, this CPRA right is an opt-out right—not an opt-in right—meaning consumers can stop a future sale of their information, but by default there is no initial regulatory limit on the sale of their personal information.[12] As a result, nothing stops 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states that it “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email and in-app notification as methods by which it may notify users of changes to the Privacy Statement.[15] If it does so, a court would quite possibly view this as clear communication, and users would have little legal recourse to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, say a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the dangers of genetic information falling into the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement provides that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses the user never intended, so long as the change is communicated to the user.[18] This transfer clause highlights a key concern for users: it allows their deeply personal genetic data to be passed to another company without additional consent, potentially subjecting them to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users of any changes to the Privacy Statement or its use of genetic information, it does not specify whether that notice will be given in advance.[19] A new entity could thus plan a change to the Privacy Statement, altering how it uses genetic information while leaving users in the dark until the change is communicated to them, at which point their information may already have been shared with third parties.

The potential 23andMe bankruptcy and sale of assets reveals deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk their sensitive genetic information being sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s potential financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks of sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without their explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024) https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, WALL ST. J. (Sept. 18, 2024) https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, which is looming larger given the current state of the business financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. §1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D Tenenbaum & Kenneth W Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024) https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 Fed. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user didn’t check a box to assent to the terms).

[15] Privacy Statement, supra note 13.

[16] See K.S.A. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take into account genetic information when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Ticketmaster, 817 Fed. App’x 393 (2019).

[19] Privacy Statement, supra note 13.


Enriching and Undermining Justice: The Risks of Zoom Court

Matthew Prager, MJLST Staffer

In the spring of 2020, the United States shut down public spaces in response to the COVID-19 pandemic. The court system did not escape this process, with jury trials across the country paused in March 2020.[1] In this rapidly changing environment, courts scrambled to adjust, using a slew of modern telecommunication and video conferencing systems to resume the various aspects of the courtroom process in the virtual world. Despite this radical upheaval to traditional courtroom structure, this new form of court appears here to stay.[2]

Much has been written about the benefits of telecommunication services like Zoom and similar software in the courtroom.[3] However, while Zoom court has been a boon to many, Zoom-style virtual court appearances also present legal challenges.[4] Some of these problems affect all courtroom participants, while others disproportionately affect highly vulnerable individuals’ ability to participate in the legal system.

Telecommunications technology, like all technology, is vulnerable to malfunctions and “glitches,” and these glitches can significantly disadvantage a party’s engagement with the legal system. In the most direct sense, glitches (be they video malfunctions, audio or microphone failures, or unstable internet connections) can limit a party’s ability to hear and be heard by their attorneys, opposing parties, or the judge, ultimately compromising their legitimate participation in the legal process.[5]

But these glitches can have effects beyond disrupting direct communication. One study found that participants evaluated individuals suffering from connection issues as less likable.[6] Another study found that mock jurors shown a video on a broken VCR recommended higher prison terms than mock jurors provided with a functional VCR.[7] In effect, technology can act as a third party in the courtroom, and when that third party misbehaves, the resulting frustration can unjustly prejudice a party, with deleterious consequences.

Even absent glitches, observing a person through a screen can negatively affect how that person is perceived.[8] Researchers noted this issue even before the pandemic: online bail hearings conducted by closed-circuit camera led to significantly higher bond amounts than hearings conducted in person.[9] Simply adjusting the camera angle can alter how an observer perceives a witness.[10]

These issues represent a universal problem for any party in the legal system, but they fall especially hard on the elderly.[11] Senior citizens often lack digital literacy with modern and emerging technologies, and some may find that their first experience with these telecommunications systems is a courtroom hearing (that is, if they even have access to the necessary technology).[12] These issues can have extreme consequences: in one case, an elderly defendant violated his probation because he failed to navigate a faulty Zoom link.[13] The elderly are especially vulnerable because issues with technical literacy can be compounded by sensory difficulties. One party with poor eyesight found that requiring communication through a screen functionally deprived him of any communication at all.[14]

While there has been some effort to return to in-person court, the benefits of virtual proceedings are too significant to ignore.[15] Virtual court minimizes transportation costs, allows vulnerable parties to engage with the legal system from the safety and familiarity of their own homes, and simplifies the logistics of the courtroom process. These benefits are indisputable for many participants in the legal system. But they come with drawbacks, and, practicalities aside, the adverse and disproportionate impact of virtual courtrooms on senior citizens should be seen as a problem to solve, not simply to endure.

Notes

[1] Debra Cassens Weiss, A slew of federal and state courts suspend trials or close for coronavirus threat, ABA JOURNAL (March 18, 2020) (https://www.abajournal.com/news/article/a-slew-of-federal-and-state-courts-jump-on-the-bandwagon-suspending-trials-for-coronavirus-threat)

[2] How Courts Embraced Technology, Met the Pandemic Challenge, and Revolutionized Their Operations, PEW, December 1, 2021 (https://www.pewtrusts.org/en/research-and-analysis/reports/2021/12/how-courts-embraced-technology-met-the-pandemic-challenge-and-revolutionized-their-operations).

[3] See Amy Petkovsek, A Virtual Path to Justice: Paving Smoother Roads to Courtroom Access, ABA (June 3, 2024) (https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/technology-and-the-law/a-virtual-path-to-justice) (finding that Zoom court: minimizes transportation costs for low-income, disabled or remote parties; allows parties to participate in court from a safe or trusted environment; minimizes disruptions for children who would otherwise miss entire days of school; protects undocumented individuals from the risk of deportation; diminishes courtroom reschedulings from parties lacking access to childcare or transportation and allows immune-compromised and other high health-risk parties to engage in the legal process without exposure to transmittable illnesses).

[4] Daniel Gielchinsky, Returning to Court in a Post-COVID Era: The Pros and Cons of a Virtual Court System, LAW.com (https://www.law.com/dailybusinessreview/2024/03/15/returning-to-court-in-a-post-covid-era-the-pros-and-cons-of-a-virtual-court-system/)

[5] Benefits & Disadvantages of Zoom Court Hearings, APPEL & MORSE, (https://www.appelmorse.com/blog/2020/july/benefits-disadvantages-of-zoom-court-hearings/) (last visited Oct. 7, 2024).

[6] Angela Chang, Zoom Trials as the New Normal: A Cautionary Tale, U. CHI. L. REV. (https://lawreview.uchicago.edu/online-archive/zoom-trials-new-normal-cautionary-tale) (“Participants in that study perceived their conversation partners as less friendly, less active and less cheerful when there were transmission delays. . . .compared to conversations without delays.”).

[7] Id.

[8] Id. (noting that “screen” interactions are remembered less vividly and obscure important nonverbal social cues).

[9] Id.

[10] Shannon Havener, Effects of Videoconferencing on Perception in the Courtroom (2014) (Ph.D. dissertation, Arizona State University).

[11] Virtual Justice? A National Study Analyzing the Transition to Remote Criminal Court, STANFORD CRIMINAL JUSTICE CENTER, Aug. 2021, at 78.

[12] Id. at 79 (describing how some parties lack access to phones, Wi-Fi or any methods of electronic communication).

[13] Ivan Villegas, Elderly Accused Violates Probation, VANGUARD NEWS GROUP (October 21, 2022) (https://davisvanguard.org/2022/10/elderly-accused-violates-probation-zoom-problems-defense-claims/)

[14] John Seasly, Challenges arise as the courtroom goes virtual, Injustice Watch (April 22, 2020) (https://www.injusticewatch.org/judges/court-administration/2020/challenges-arise-as-the-courtroom-goes-virtual/)

[15] Kara Berg, Leading Michigan judges call for return to in-person court proceedings (Oct. 2, 2024, 9:36:00 PM), (https://detroitnews.com/story/news/local/michigan/2024/10/02/leading-michigan-judges-call-for-return-to-in-person-court-proceedings/75484358007/#:~:text=Courts%20began%20heavily%20using%20Zoom,is%20determined%20by%20individual%20judges).


AR/VR/XR: Breaking the Wall of Legal Issues Between the Real World and the Virtual World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing our lives in many aspects. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver of both tech and business. But how significantly can this technology impact human lives? What industries will it affect? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated, immersive environment that can provide an experience either similar to or completely different from the real world,[3] while Mixed Reality/Extended Reality (“MR”/“XR”) glasses are relatively compact and sleek, weighing much less than VR headsets.[4] The quality that most distinguishes XR from VR is that XR users can still see the world around them, because the device projects a translucent screen on top of the real world. The differences among these three vision technologies may soon be eliminated as they are combined into a single device.

Typically, vision technology helps people mentally process 2-D information in a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and body after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that AR/VR trials have produced adverse effects, caused by the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects how people behave in social situations, leaving users feeling less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion and is projected to reach $88 billion by 2026.[8] As indicated by industry specialists and examiners, outside of gaming, a significant portion of vision technology income will accumulate from e-commerce and retail (fashion and beauty), manufacturing, the education industry, healthcare, real estate, and e-sports, which will further impact entertainment, cost of living, and innovation.[9] To manage this tremendous opportunity, it is crucial to understand potential legal risks and develop a comprehensive legal strategy to address these upcoming challenges.

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also implicates contractual concerns, service remedies, and liability for infringement of third-party IP. For example, during patent prosecution it is difficult to argue that the hardware executing the invention (characters or data information) is a unique machine, or that the designated steps the hardware performs are special under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against abstracting inventions: “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas. . . . [T]read carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns include data privacy, harassment, virtual trespass, and even violent attacks attributable to the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices are ambiguous. It is also unclear whether courts will accept a defense of error in judgment based on the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have questioned whether AR/VR devices negatively impact the mental and physical health of younger users, and whether those users’ social communication skills may decline as they feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although some contend that the Children’s Online Privacy Protection Act (COPPA) prohibits the collection of personally identifiable information if an operator believes a user to be under the age of thirteen.[12]

According to research trends, combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale, bring people together in shared virtual environments, and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for this upcoming world.

Notes

[1] Eric Ravenscraft, What Is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, ARTICLE: Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkt. and Mkt., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining an evaluation standard on when determining whether a claim recites significantly more than a judicial exception depends on whether the additional elements(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC maintains that the rule is intended to address AI’s potential to hurt more investors more quickly than ever before, but critics counter that it would reach far beyond generative AI to cover nearly all technology. Critics also argue that the requirement that conflicts of interest be eliminated or neutralized is nearly impossible to meet and a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice and comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In its public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s prescriptive requirement that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities law, under which disclosure of potential conflicts of interest has historically been sufficient. Historically, conflicts of interest stemming from AI and technology have been regulated like any other conflict of interest: brokers must disclose their conflicts, and their conduct is primarily policed through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest from such technology does not necessarily require advisers to discard that technology, only to change the way firm-favorable information is analyzed or weighed; even so, the proposal marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule, both through the notice and comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how it could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of generative AI in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI in conflict of interest issues, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. and Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. & Exch. Comm’n, Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (proposed Jul. 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Brushstroke Battles: Unraveling Copyright Challenges With AI Artistry

Sara Seid, MJLST Staffer

Introduction

Imagine this: after a long day of thinking and participating in society, you decide to curl up on the couch with your phone and crack open a new fanfiction to decompress. Fanfiction, fictional writing based on another fictional work, has grown in popularity with the expansion and increased use of the internet. Many creators publish their works to websites like Archive of Our Own (AO3) or Tumblr, which are free and provide a community for creative minds to share their work. While the legality of fanfiction in general is debated, the real concern among creators is AI-generated works. Original characters and works are being used, for profit, by artificial intelligence programs that “create” new works. Profits can be generated from fanfiction through paid AI text generators that produce written works, or through advertisements on platforms. What was once a celebration of favorite works has become tarnished by the theft of fanfiction by AI programs.

First Case to Address the Issue

Thaler v. Perlmutter is a new and instructive case on the issue of copyright and AI-generated creative works – namely artwork.[1] The action was brought by Stephen Thaler against the Copyright Office for denying his application for copyright due to the work’s lack of human authorship.[2] The U.S. District Court for the District of Columbia was the first court to rule on whether AI-generated art can receive copyright protection.[3] The court held that AI-created artwork could not be copyrighted.[4] In considering the plaintiff’s copyright registration application for “A Recent Entrance to Paradise,” the Register concluded that this particular work would not support a claim to copyright because the work “lacked human authorship and thus no copyright existed in the first instance.”[5] The plaintiff’s primary contention was that the artwork was produced by the computer program he created, and that, through the program’s AI capabilities, the product was his.[6]

The court went on to opine that copyright is designed to adapt with the times.[7] Underlying that adaptability, however, has been a “consistent understanding that human creativity is the sine qua non at the core of copyrightability,” even as that human creativity is channeled through new tools or into new media.[8] Therefore, despite the plaintiff’s creation of the computer program, the painting was not produced by a human and was not eligible for copyright. This opinion, while clear, still leaves unanswered questions regarding the extent of human involvement in AI-generated work.[9] What level of human involvement is necessary for an AI creation to qualify for copyright?[10] Is there a percentage to meet? Does the AI program require multiple humans to work on it as a prerequisite? Adaptability with the times, while essential, also means there are new, developing questions about the right way to address new technology and its capabilities.

Implications of the Case for Fanfiction

Artificial intelligence is a new concern among scholars. While its accessibility and convenience create endless new possibilities for a multitude of careers, it also directly threatens creative professions and creative outlets. Without the consent or authority of creators, AI algorithms can process artwork and fictional literary works created by fans to produce their own “original” work. AI can be used to replace professional and amateur creative writers. Additionally, as AI’s technological capacity increases, it can mimic and reproduce art that resembles or belongs to a human artist.[11]

However, the main concern for artists is what AI will do to creative human industries in general.[12] Legal scholars are equally concerned about what AI means for copyright law.[13] The main type of AI that fanfiction writers worry about is generative AI.[14] Essentially, huge datasets are scraped together to train the AI, and through a technical process the AI is able to devise new content that resembles the training data but is not identical to it.[15] Creators are outraged at what they consider theft of their artistic creations.[16] Artwork such as illustrations for articles, books, or album covers may soon face competition from AI, undermining a thriving area of commercial art as well.[17]

Currently, fanfiction is protected under the doctrine of fair use, which allows creators to add new elements, criticism, or commentary to an already existing work, in a way that transforms it.[18] The next question likely to stem from Thaler will be whether AI creations are subject to the same protections that fan created works are.

The fear of AI’s possible consequences can be somewhat assuaged by the reality that AI cannot accurately and genuinely capture human memory, thought, and emotional expression. These human capacities will continue to make creators necessary for their connection to humanity and their ability to express that connection. How a fan resonates with a novel or T.V. show, and then produces a piece of work based on that feeling, is uniquely theirs. The decision in Thaler reaffirms this notion: AI does not offer the human creative element required both to receive copyright and to connect with viewers in a meaningful way.[19]

Furthermore, the difficulty with new technology like AI is that it cannot be understood immediately, which can cause frustration or a sense of threat. Change is uncomfortable. However, with knowledge and experience, AI might become a useful tool for fanfiction creators.

The element of creative projects that makes them so meaningful to people is the way they can provide true insight and an experience that is relatable and distinctly human.[20] The alternative to banning AI or completely rendering human artists obsolete is to find a middle ground that protects both sides. The interests of technological innovation should not supersede the concerns of artists and creators.

Ultimately, as stated in Thaler, AI artwork that has no human authorship does not get copyright.[21] However, this still leaves unanswered questions that future cases will likely present before the courts. Are there protections that can be made for online creators’ artwork and fictional writings to prevent their use or presence in AI databases? The Copyright Act exists to be malleable and adaptable with time.[22] Human involvement and creative control will have to be assessed as AI becomes more prominent in personal and professional settings.

Notes

[1] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1 (D.D.C. Aug. 18, 2023).

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Id. at *3.

[7] Id. at *10.

[8] Id.

[9] https://www.natlawreview.com/article/judge-rules-content-generated-solely-ai-ineligible-copyright-ai-washington-report.

[10] Id.

[11] https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai#:~:text=AI%20doesn%27t%20do%20the,what%20AI%20art%20is%20doing.%E2%80%9D.

[12] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[13] https://www.reuters.com/legal/ai-generated-art-cannot-receive-copyrights-us-court-says-2023-08-21.

[14] https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney.

[15] Id.

[16] Id.

[17] Id.

[18] https://novelpad.co/blog/is-fanfiction-legal# (citing Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994)).

[19] https://www.reuters.com/default/humans-vs-machines-fight-copyright-ai-art-2023-04-01/.

[20] https://news.harvard.edu/gazette/story/2023/08/is-art-generated-by-artificial-intelligence-real-art/.

[21] Thaler v. Perlmutter, 2023 U.S. Dist. LEXIS 145823, at *1.

[22] Id. at *10.


Who Is Regulating Regulatory Public Comments?

Madeleine Rossi, MJLST Staffer

In 2015 the Federal Communications Commission (FCC) issued a rule on “Protecting and Promoting the Open Internet.”[1] The basic premise of these rules was that internet service providers had unprecedented control over access to information for much of the public. Those in favor of the new rules argued that broadband providers should be required to enable access to all internet content, without either driving or throttling traffic to particular websites for their own benefit. Opponents of these rules – typically industry players such as the same broadband providers that would be regulated – argued that such rules were burdensome and would prevent technological innovation. The fight over these regulations is colloquially known as the fight over “net neutrality.” 

In 2017 the FCC reversed course and put forth a proposal to repeal the 2015 regulations. Any time an agency proposes a rule, or proposes to repeal one, it must go through the notice-and-comment rulemaking procedure. One of the most important parts of this process is the solicitation of public comments. Many rules are put forth without much attention or fanfare from the public. Some rules may receive only hundreds of public comments, often from the industry at which the rule is aimed. Few proposed rules get attention from the public at large. However, the fight over net neutrality – both the 2015 rules and their repeal in 2017 – garnered significant public interest. The original 2015 rule amassed almost four million comments.[2] At the time, this was the most public comments a proposed rule had ever received.[3] In 2017, the rule’s rescission blew past four million comments, ultimately drawing a total of almost twenty-two million.[4]

At first glance this may seem like a triumph for the democratic purpose of the notice-and-comment requirement. After all, it should be a good thing that so many American citizens are taking an interest in the rules that will ultimately determine how they can use the internet. Unfortunately, that was not the full story. New York Attorney General Letitia James released a report in May of 2021 detailing her office’s investigation into wide-ranging fraud that plagued the notice-and-comment process.[5] Of the twenty-two million comments submitted about the repeal, a little under eight million were generated by a single college student.[6] These computer-generated comments supported the original regulations but used fake names and fabricated text.[7] Another eight million comments were submitted by lead generation companies hired by the broadband companies.[8] These companies stole individuals’ identities and submitted computer-generated comments on their behalf.[9] While these comments used real people’s identities, the content was fabricated to support repealing the 2015 regulations.[10]

Attorney General James’ investigation showed that real comments, submitted by real people, were “drowned out by masses of fake comments and messages being submitted to the government to sway decision-making.”[11] When the investigation was complete, James’ office concluded that nearly eighteen of the twenty-two million comments received by the FCC in 2017 were faked.[12] The swarm of fake comments created the false perception that the public was generally split on the issue of net neutrality. In fact, anywhere from seventy-five to eighty percent of Americans say that they support net neutrality.[13]

This is not an issue that is isolated to the fight over net neutrality. Other rulemaking proceedings have been targeted as well, namely by the same lead generation firms involved in the 2017 notice-and-comment fraud campaign.[14] Attorney General James’ investigation found that regulatory agencies like the Environmental Protection Agency (EPA), which is responsible for promulgating rules that protect people and the environment from risk, had also been targeted by such campaigns.[15] When agencies like the FCC or EPA propose regulations for the protection of the public, the democratic process of notice-and-comment is completely upended when industry players are able to “drown out” real public voices.

So, what can be done to preserve the democratic nature of the notice-and-comment period? As the technology involved in these schemes advances, this is likely to become not only a recurring issue but one that could entirely subvert the regulatory process of rulemaking. One way injured parties are fighting back is with lawsuits.

In May of 2023, Attorney General James announced that she had come to a second agreement with three of the lead generation firms involved in the 2017 scheme to falsify public comments.[16] The three companies agreed to pay $615,000 in fines for their involvement.[17] This came in addition to a previous agreement in which the three stipulated to paying four million dollars in fines and agreed to change their future lead generation practices; the litigation is ongoing.[18]

However, more must be done to ensure that the notice-and-comment process is not entirely subverted. Financial punishment after the fact does not account for the harm to the democratic process that is already done. Currently, the only recourse is to sue these companies for their fraudulent and deceptive practices. However, lawsuits will typically only result in financial losses. Financial penalties are important, but they will always come after the fact. Once litigation is under way, the harm has already been done to the American public.

Agencies need to ensure that they are keeping up with the pace of rapidly evolving technology so that they can properly vet the validity of the comments they receive. While it is important to keep public commenting a relatively open and easy practice, some kind of vetting procedure has become essential. One option would be to require an accompanying email address or phone number for each comment and send a simple verification code. Email addresses or phone numbers could also be contacted during a vetting process once the public comment period closes. While it would likely be impractical to contact each individual independently, a random sample would at least flag whether a coordinated, large-scale fake-commenting campaign had taken place.
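
As a rough sketch of that last idea (the function, sample size, and verification method here are illustrative assumptions, not anything prescribed by the Attorney General’s report), an agency could estimate the share of fraudulent comments from a verified random sample:

```python
import math
import random

def estimate_fake_fraction(comments, sample_size, is_fake, z=1.96):
    """Randomly sample submitted comments, verify each one (e.g. by
    contacting the listed email address or phone number), and return the
    estimated fraction of fake comments with a ~95% confidence interval."""
    sample = random.sample(comments, sample_size)
    fakes = sum(1 for comment in sample if is_fake(comment))
    p_hat = fakes / sample_size
    # Normal-approximation margin of error for a sample proportion.
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, max(0.0, p_hat - margin), min(1.0, p_hat + margin)
```

Even a few hundred verified comments out of millions would reliably expose fraud on the scale found in 2017, where roughly eighteen of twenty-two million comments were faked.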

Additionally, the legislature should keep an eye on fraudulent practices that impact the notice-and-comment process. Lawmakers can and should strengthen laws to punish companies engaged in these practices. For example, in her report Attorney General James recommends that lawmakers do at least two things. First, they should explicitly and statutorily prohibit “deceptive and unauthorized comments.”[19] To be effective, these laws should establish large civil fines. Second, the legislature should “strengthen impersonation laws.”[20] Current impersonation laws were not designed with mass-impersonation fraud in mind. These statutes should be amended to increase penalties when many individuals are impersonated.

In conclusion, the use of fake comments to sway agency rulemaking is a problem that is only going to worsen with time and the advance of technology. This is a serious problem that should be taken as such by both agencies and the legislature. 

Notes

[1] 80 Fed. Reg. 19737.

[2] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[3] Id.

[4] Id.

[5] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[6] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[7] Id.

[8] Id.

[9] Id.

[10] Id.

[11] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[12] Id.

[13] https://thehill.com/policy/technology/435009-4-in-5-americans-say-they-support-net-neutrality-poll/, https://publicconsultation.org/united-states/three-in-four-voters-favor-reinstating-net-neutrality/.

[14] Id.

[15] https://apnews.com/article/settlement-fake-public-comments-net-neutrality-ae1f69a1f5415d9f77a41f07c3f6c358.

[16] Id.

[17] Id.

[18] https://apnews.com/article/government-and-politics-technology-business-9f10b43b6aacbc750dfc010ceaedaca7.

[19] https://ag.ny.gov/sites/default/files/oag-fakecommentsreport.pdf.

[20] Id.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view such laws as a violation of First Amendment rights and as unnecessary, given the incentives private companies have to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute, which allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed.”[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would burden innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who act anonymously. Proposed federal regulation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[13] However, opponents view this as useless legislation: deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first place.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


A New Iron Age: New Developments in Battery Technology

Poojan Thakrar, MJLST Staffer

Introduction

In coming years, both Great River Energy and Xcel Energy are installing pilot projects of a new iron-air battery technology.[1] Both utilities are working with Boston-based company Form Energy. Great River Energy, which is Minnesota’s second-largest energy provider, plans to install a 1.5-megawatt battery next to its natural gas plant in Cambridge, MN. Xcel Energy, the state’s largest energy provider, will deploy a 10-megawatt battery in Becker, MN and Pueblo, CO. The batteries can store energy for up to 100 hours, which the utilities emphasize as crucial due to their ability to provide power during multi-day blizzards. The projects may be online as early as 2025, Form Energy says.[2]

The greater backdrop for these battery projects is Minnesota’s new carbon-free targets. Earlier this year, with new control of both chambers, Minnesota Democrats passed a bill mandating 100 percent carbon-free energy by 2040.[3] Large utility-scale batteries such as the ones proposed by Great River Energy and Xcel can play an important role in that transition by mitigating intermittency concerns often associated with renewables.

Technology

This technology may be uniquely suited for a future in which utilities rely more heavily on batteries. While this technology is less energy-dense than traditional lithium-ion batteries, the iron used at the heart of the battery is more abundant than lithium.[4] This allows utilities to sidestep many of the concerns associated with lithium and other minerals required in traditional batteries.[5] Iron-air batteries also tend to be heavier and larger than lithium-ion batteries that store equivalent energy. For batteries in phones, laptops, and cars, weight and volume are important features to keep in mind. However, this new technology could help accelerate uptake of large utility-scale batteries, where weight and volume are of less concern.

If your high school chemistry is rust-y, take a look at this graphic by Form Energy. When discharging electricity, the battery ‘inhales’ oxygen from the air and converts pure iron into rust. This allows electrons to flow, as seen on the right side of the graphic. As the battery is charged, the rust ‘exhales’ oxygen and converts back to iron. The battery relies on this reversible rust cycle to ultimately store its electricity. Form Energy claims that its battery can store energy at one-tenth the cost of lithium-ion batteries.[6]
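
The “reversible rust cycle” can be written out in rough chemical terms. The half-reactions below are a simplified alkaline iron-air chemistry commonly given for this class of battery; they are an illustration, not equations published by Form Energy:

```latex
\begin{align*}
\text{Anode (discharge):}\quad & \mathrm{Fe + 2\,OH^- \rightarrow Fe(OH)_2 + 2\,e^-} \\
\text{Cathode (discharge):}\quad & \mathrm{O_2 + 2\,H_2O + 4\,e^- \rightarrow 4\,OH^-} \\
\text{Overall:}\quad & \mathrm{2\,Fe + O_2 + 2\,H_2O \;\rightleftharpoons\; 2\,Fe(OH)_2}
\end{align*}
```

Charging simply runs the overall reaction right to left: the rust gives up its oxygen (“exhales”) and returns to metallic iron.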

Administrative Procedures

Xcel recently filed a petition with the Minnesota Public Utilities Commission (MPUC), which has jurisdiction over investor-owned utilities such as Xcel.[7] The March 6 petition seeks to recover the cost of the pilot battery project. The request was made pursuant to Minnesota Statutes section 216B.16, subdivision 7e, which allows a utility to recover costs associated with energy storage system pilot projects.

In addition, the pilot project qualifies for a standard 30 percent investment tax credit (ITC) as well as a 10 percent bonus under the federal Inflation Reduction Act because Becker, MN is an “energy community.” An “energy community” is an area that formerly had a coal mine or coal-fired power plant that has since closed. Becker is home to the Sherco coal-fired power plant, which has been an important part of that city’s economy for decades. The pilot may also receive an additional 10 percent bonus through the IRA because of the battery’s domestic materials. Any cost recovery through a rider would only be for costs beyond applicable tax grants and potential future grant awards. The MPUC has opened a comment period until April 21, 2023. The issue at hand is: should the Commission approve the Long Duration Energy Storage System Pilot proposed by Xcel Energy in its March 6, 2023 petition?[8]
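
To make the credit stacking concrete, here is a minimal sketch of how the percentages named above combine (the function name and cost figure are illustrative assumptions, not Xcel’s actual accounting, and real ITC calculations involve eligibility rules beyond this arithmetic):

```python
def net_recoverable_cost(capital_cost: float,
                         base_itc: float = 0.30,
                         energy_community_bonus: float = 0.10,
                         domestic_content_bonus: float = 0.10) -> float:
    """Return the capital cost left for rider recovery after stacking the
    base investment tax credit with the two IRA bonus credits."""
    total_credit_rate = base_itc + energy_community_bonus + domestic_content_bonus
    return capital_cost * (1.0 - total_credit_rate)
```

Under these assumptions, a project qualifying for all three credits would leave only half of its capital cost to be recovered from ratepayers.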

As a member-owned cooperative, Great River Energy does not need approval from the MPUC to recover the price of the battery project through its rates.

Conclusion

Ultimately, this is a bet on an innovative technology by two of the largest electricity providers in the state. If approved by the MPUC, ratepayers will foot the bill for this new technology. However, new technology and large investment projects are crucial for a cleaner and more resilient energy future.

Notes

[1] See Kirsti Marohn, ‘Rusty’ batteries could hold key to Minnesota’s carbon-free power future, MPR News (Feb. 10, 2023), https://www.mprnews.org/story/2023/02/10/rusty-batteries-could-hold-key-to-carbonfree-power-future. See also Ryan Kennedy, Retired coal sites to host multi-day iron-air batteries, PV Magazine (Jan. 26, 2023), https://pv-magazine-usa.com/2023/01/26/retired-coal-sites-to-host-multi-day-iron-air-batteries/.

[2] Andy Colthorpe, US utility Xcel to put Form Energy’s 100-hour iron-air battery at retiring coal power plant sites, Energy Storage News (Jan. 27, 2023), https://www.energy-storage.news/us-utility-xcel-to-put-form-energys-100-hour-iron-air-battery-at-retiring-coal-power-plant-sites/.

[3] Dana Ferguson, Walz signs carbon-free energy bill, prompting threat of lawsuit, MPR News (Feb. 7, 2023), https://www.mprnews.org/story/2023/02/07/walz-signs-carbonfree-energy-bill-prompting-threat-of-lawsuit.

[4] Form Energy Partners with Xcel Energy on Two Multi-day Energy Storage Projects, BusinessWire (Jan. 26, 2023), https://www.businesswire.com/news/home/20230126005202/en/Form-Energy-Partners-with-Xcel-Energy-on-Two-Multi-day-Energy-Storage-Projects.

[5] See Amit Katwala, The Spiralling Environmental Cost of Our Lithium Battery Addiction, Wired UK (May 8, 2018), https://www.wired.co.uk/article/lithium-batteries-environment-impact/. See also The Daily, The Global Race to Mine the Metal of the Future, New York Times (Mar. 18, 2022), https://www.nytimes.com/2022/03/18/podcasts/the-daily/cobalt-climate-change.html.

[6] Battery Technology, Form Energy, https://formenergy.com/technology/battery-technology/ (last visited Apr. 6, 2023).

[7] Petition, Long-Duration Energy Storage System Pilot Project at Sherco, at 4, Minnesota PUC (Mar. 6, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={8043C886-0000-CC18-A0DF-1A2C7EA08FA1}&documentTitle=20233-193670-01.

[8] Notice of Comment Period, Minnesota PUC (Mar. 21, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={90760487-0000-C415-89F7-FDE36D038B2C}&documentTitle=20233-194113-01.


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, overflowing emergency rooms were coupled with doctor shortages.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and rose to as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth services has decreased slightly in recent years, it seems likely here to stay. Nowhere has telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three outpatient mental health visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, the shift in how healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth in mental health services is privacy, for several reasons. The primary concern is that telehealth takes place over the phone or via personal computers, and when patients and providers use personal devices it is difficult to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While some concerns remain, these secure servers have mitigated much of the risk.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth for mental health services is more difficult to address: the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] As telehealth mental health services have grown, so has the use of these apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for medical records and other individually identifiable health information in the course of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, and credit score.[16]

According to a 2022 study, the popular therapy app BetterHelp was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

An example of information that does get shared is the intake questionnaire.[19] BetterHelp and other therapy apps require customers to fill out an intake questionnaire in order to be matched with a provider.[20] BetterHelp was specifically found to have shared the answers to these questionnaires with an analytics company, along with the user’s approximate location and device.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share how long someone uses the app, how long their therapy sessions are, how long they spend sending messages on the app, when they log in, when they send a message or speak to their therapist, their approximate location, how often they open the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA put out guidance saying that it is going to use its enforcement discretion when dealing with mental health apps.[27] This means that if the privacy risk seems to be low, the FDA is not going to pursue enforcement against these companies.[28]

Ultimately, if telehealth mental health services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps are offering telehealth services, just like any healthcare provider that is covered by HIPAA. Knowledge that personal data is being shared so freely by mental health apps breeds distrust, and many users have already lost confidence in them over these privacy concerns. In the long run, regulatory oversight would increase the pressure on these companies to show that their services can be trusted, potentially increasing their success by rebuilding public trust.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, do Mental Wellness Apps Need More Oversight?, Fierce Healthcare (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences on human society. ChatGPT, a point for the robots, embodies people’s fear of the bleak future of a fully automated world.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says that ChatGPT’s limitations include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks to the input phrasing or to the same prompt attempted multiple times, (3) being excessively verbose and overusing certain phrases, (4) being unable to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
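The autoregressive idea, predicting the next word from the words generated so far, can be illustrated with a toy sketch. This is not ChatGPT's actual implementation: the hypothetical bigram table and greedy decoding below stand in for a learned neural network that assigns probabilities over an entire vocabulary.

```python
# Toy autoregressive text generation: each step predicts the next token
# from the tokens produced so far, then appends it and repeats.
# A real model learns these predictions; this table is a hand-made stand-in.
NEXT = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt, max_new_tokens):
    """Greedily extend the prompt one token at a time."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = NEXT.get(tokens[-1])
        if nxt is None:  # no known continuation: stop early
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the", 3))  # the cat sat on
```

Because each output token is fed back in as context for the next prediction, the loop never "plans ahead," which is one intuition for why such models can produce fluent but confidently wrong text.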

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that it is a tipping point for AI because of this difference in quality: it can write weight-loss plans and children’s books, and even offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it produced only 347 words, nowhere near the 1,000-word minimum I had set for it. What is evident in each of these cases, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. One of Professor Mollick’s students used ChatGPT to complete a four-hour project in less than an hour, writing computer code for a startup prototype with code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, ChatGPT would not graduate at the top of its class, and would in fact be placed on academic probation, but on these results it would still, notably, graduate with a degree.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and serving as this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being confined to areas where failure is expensive and intolerable, such as autonomous driving, AI is now being used in tasks where some failure is acceptable. In these tasks, AI like ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities regarding productivity expands exponentially. Businesses and individuals can save time and resources by having AI do menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain sudden inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from using offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

There are drawbacks to using AI like ChatGPT for these tasks. While ChatGPT gives human-like answers, it does not necessarily give the right answer. It also cannot explain what it does or how it does it, making it difficult to verify how it arrives at the answers it gives. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why it matters.

Conclusion

In a more positive light, some may herald the improvements in AI like ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans become vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to produce human-like intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, today’s discussions about AI-generated art suggest that AI may achieve what was believed impossible sooner rather than later. In that case, AI like ChatGPT can be viewed not as the harbinger of a human-machine society, but as an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.