
The Policy Future for Telehealth After the Pandemic

Jack Atterberry, MJLST Staffer

The Pandemic Accelerated Telehealth Utilization

Before the Covid-19 pandemic began, telehealth accounted for a negligible share (rounding to 0%) of total outpatient care visits in the United States healthcare system.[1] In the two years after the pandemic began, telehealth usage soared to over 10% of outpatient visits and became widely used across all payer categories, including Medicare and Medicaid.[2] The social distancing realities of the pandemic years, coupled with federal policy measures, enabled this radical transition toward telehealth care visits.

In response to the onset of Covid-19, the US federal government relaxed and modified many telehealth regulations, expanding permissible access to telehealth services. After a public health emergency was declared in early 2020, the Centers for Medicare & Medicaid Services (CMS) and the Department of Health and Human Services (HHS) modified preexisting telehealth regulations to expand the permissible use of those services. Specifically, CMS temporarily expanded Medicare coverage to include telehealth services without the need for in-person visits, removed telehealth practice restrictions such as limits on the types of providers that could deliver telehealth, and increased reimbursement rates for telehealth services to bring them closer to in-person visit rates.[3] In addition, HHS granted greater HIPAA flexibility by easing requirements around the use of popular communication platforms such as Zoom, Skype, and FaceTime, provided they are used in good faith.[4] Collectively, these changes helped drive a significant rise in telehealth services and expanded access to care for many people who otherwise would not have received healthcare. Unfortunately, many of these telehealth policy provisions are set to expire in 2024, leaving open the question of whether the benefits of telehealth expansion will remain after the public emergency measures end.[5]

Issues with Telehealth Care Delivery Between States

A major legal impediment to telehealth expansion in the US is the complex interplay of state and federal laws and regulations governing telehealth care delivery. At the state level, differences in several key areas have historically held back telehealth's expansion. First, healthcare providers are most often licensed at the state level, which creates a barrier for providers who want to offer telehealth services across state lines. While many states implemented temporary waivers or joined interstate medical licensure compacts to address this issue during the pandemic, many others did not, and large inconsistencies remain. States also differ significantly in reimbursement policy, as coverage varies by payer type and region; this has left providers unsure whether to deliver care in certain states for fear of inadequate reimbursement. Although the federal health emergency eased interstate telehealth restrictions during the pandemic, these challenges will likely persist after the temporary telehealth measures are lifted at the end of 2024.

What the pandemic-era temporary easing of telehealth restrictions taught us is that interstate telehealth improves health outcomes, increases patient satisfaction, and narrows gaps in care delivery. In particular, rural communities and other underserved areas with relatively few healthcare providers benefited greatly from the ability to receive care from an out-of-state provider. For example, patients in states like Montana, North Dakota, and South Dakota benefit immensely from being able to talk with an out-of-state mental health provider because of the severe shortages of psychiatrists, psychologists, and other mental health practitioners in those states.[6] In addition, a 2021 study by the Bipartisan Policy Center highlighted that patients in states that joined interstate licensure compacts experienced a noticeable improvement in care experience, and healthcare workforces saw a decreased burden on their chronically stressed providers.[7] These positive outcomes from eased interstate healthcare regulations should inform telehealth policy moving forward.

Policy Bottlenecks to Telehealth Care Access Expansion

The future of telehealth in American healthcare is surprisingly uncertain as the US emerges from the pandemic years. As the public health emergency measures that removed various legal and regulatory barriers to telehealth expire next year, many Americans could be left without access to healthcare via telehealth services. To ensure that telehealth remains a part of American healthcare moving forward, federal and state policymakers will need to act to bring long-term certainty to the telehealth regulatory framework. In particular, advocacy groups such as the American Telemedicine Association recommend that policymakers focus on key changes such as removing licensing barriers to interstate telehealth care, modernizing reimbursement structures to align with value-based payment principles, and permanently adopting pandemic-era telehealth access for Medicare, Federally Qualified Health Centers, and Rural Health Clinics.[8] Another valuable federal regulatory change would be to continue allowing the prescription of controlled substances without an in-person visit. This would entail modifying the Ryan Haight Act, which requires an in-person medical exam before controlled substances may be prescribed.[9] Like any healthcare reform in the US, cementing these telehealth policy changes as law will be a major uphill battle. Nonetheless, expanding access to telehealth could be a bipartisan policy opportunity for lawmakers, as it would expand access to care and help drive the transition toward value-based care, leading to better health outcomes for patients.

Notes

[1] https://www.healthsystemtracker.org/brief/outpatient-telehealth-use-soared-early-in-the-covid-19-pandemic-but-has-since-receded/

[2] https://www.cms.gov/newsroom/press-releases/new-hhs-study-shows-63-fold-increase-medicare-telehealth-utilization-during-pandemic#:~:text=Taken%20as%20a%20whole%2C%20the,Island%2C%20New%20Hampshire%20and%20Connecticut.

[3] https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency

[4] Id.

[5] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[6] https://thinkbiggerdogood.org/enhancing-the-capacity-of-the-mental-health-and-addiction-workforce-a-framework/

[7] https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/11/BPC-Health-Licensure-Brief_WEB.pdf

[8] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[9] https://www.aafp.org/pubs/fpm/issues/2021/0500/p9.html


A New Iron Age: New Developments in Battery Technology

Poojan Thakrar, MJLST Staffer

Introduction

In the coming years, both Great River Energy and Xcel Energy will install pilot projects of a new iron-air battery technology.[1] Both utilities are working with Boston-based company Form Energy. Great River Energy, Minnesota's second-largest energy provider, plans to install a 1.5-megawatt battery next to its natural gas plant in Cambridge, MN. Xcel Energy, the state's largest energy provider, will deploy 10-megawatt batteries in Becker, MN and Pueblo, CO. The batteries can store energy for up to 100 hours, which the utilities emphasize as crucial for providing power during multi-day blizzards. The projects may be online as early as 2025, Form Energy says.[2]

The greater backdrop for these battery projects is Minnesota’s new carbon-free targets. Earlier this year, with new control of both chambers, Minnesota Democrats passed a bill mandating 100 percent carbon-free energy by 2040.[3] Large utility-scale batteries such as the ones proposed by Great River Energy and Xcel can play an important role in that transition by mitigating intermittency concerns often associated with renewables.

Technology

This technology may be uniquely suited for a future in which utilities rely more heavily on batteries. While it is less energy-dense than traditional lithium-ion batteries, the iron at the heart of the battery is more abundant than lithium.[4] This allows utilities to sidestep many of the concerns associated with lithium and the other minerals required in traditional batteries.[5] Iron-air batteries also tend to be heavier and larger than lithium-ion batteries that store equivalent energy. For batteries in phones, laptops, and cars, weight and volume are important considerations. However, this new technology could help accelerate the uptake of large utility-scale batteries, where weight and volume matter less.

If your high school chemistry is rust-y, take a look at this graphic by Form Energy. When discharging electricity, the battery ‘inhales’ oxygen from the air and converts pure iron into rust. This allows electrons to flow, as seen on the right side of the graphic. As the battery is charged, the rust ‘exhales’ oxygen and converts back to iron. The battery relies on this reversible rust cycle to ultimately store its electricity. Form Energy claims that its battery can store energy at one-tenth the cost of lithium-ion batteries.[6]
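The reversible rust cycle described above can be sketched with the textbook alkaline iron-air half-reactions. This is a simplified, generic summary of iron-air electrochemistry, not Form Energy's published cell design:

```latex
% Discharge (left to right): iron "rusts" to iron(II) hydroxide, releasing electrons.
% Charge (right to left): the rust is reduced back to iron and oxygen is released.
\begin{align*}
\text{Anode:}   &\quad \mathrm{Fe} + 2\,\mathrm{OH^-} \rightleftharpoons \mathrm{Fe(OH)_2} + 2e^- \\
\text{Cathode:} &\quad \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \rightleftharpoons 4\,\mathrm{OH^-} \\
\text{Overall:} &\quad 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightleftharpoons 2\,\mathrm{Fe(OH)_2}
\end{align*}
```

The hydroxide ions produced at the cathode are consumed at the anode, so the overall reaction reduces to iron, oxygen, and water cycling into rust and back.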

Administrative Procedures

Xcel recently filed a petition with the Minnesota Public Utilities Commission (MPUC), which has jurisdiction over investor-owned utilities such as Xcel.[7] The March 6th petition seeks to recover the cost of the pilot battery project. The request was made pursuant to Minnesota Statutes section 216B.16, subdivision 7e, which allows a utility to recover costs associated with energy storage system pilot projects.

In addition, the pilot project qualifies for a standard 30 percent investment tax credit (ITC), as well as a 10 percent bonus under the federal Inflation Reduction Act (IRA) because Becker, MN is an "energy community": an area that formerly had a coal mine or coal-fired power plant that has since closed. Becker is home to the Sherco coal-fired power plant, which has been an important part of that city's economy for decades. The pilot may also receive an additional 10 percent bonus through the IRA because of the battery's domestic materials. Any cost recovery through a rider would cover only costs beyond applicable tax credits and potential future grant awards. The MPUC has opened a comment period until April 21st, 2023. The issue at hand: should the Commission approve the Long Duration Energy Storage System Pilot proposed by Xcel Energy in its March 6, 2023 petition?[8]
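To see how these credits could stack, consider a hypothetical eligible project cost of $100 million (an illustrative figure only; the petition's actual cost figures are not reproduced here):

```latex
\underbrace{30\%}_{\text{base ITC}}
+ \underbrace{10\%}_{\text{energy community}}
+ \underbrace{10\%}_{\text{domestic content}}
= 50\%
\qquad\Rightarrow\qquad
\$100\text{M} \times 0.50 = \$50\text{M in credits}
```

Under that assumption, only the remaining roughly $50 million would be eligible for recovery through the rider.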

As a member-owned cooperative, Great River Energy does not need approval from the MPUC to recover the cost of its battery project through its rates.

Conclusion

Ultimately, this is a bet on an innovative technology by two of the largest electricity providers in the state. If approved by the MPUC, ratepayers will foot the bill for this new technology. However, new technology and large investment projects are crucial for a cleaner and more resilient energy future.

Notes

[1] See Kirsti Marohn, ‘Rusty’ batteries could hold key to Minnesota’s carbon-free power future, MPR News (Feb. 10, 2023), https://www.mprnews.org/story/2023/02/10/rusty-batteries-could-hold-key-to-carbonfree-power-future. See also Ryan Kennedy, Retired coal sites to host multi-day iron-air batteries, PV Magazine (Jan. 26, 2023), https://pv-magazine-usa.com/2023/01/26/retired-coal-sites-to-host-multi-day-iron-air-batteries/.

[2] Andy Colthorpe, US utility Xcel to put Form Energy’s 100-hour iron-air battery at retiring coal power plant sites, Energy Storage News (Jan. 27, 2023), https://www.energy-storage.news/us-utility-xcel-to-put-form-energys-100-hour-iron-air-battery-at-retiring-coal-power-plant-sites/.

[3] Dana Ferguson, Walz signs carbon-free energy bill, prompting threat of lawsuit, MPR News (Feb. 7, 2023), https://www.mprnews.org/story/2023/02/07/walz-signs-carbonfree-energy-bill-prompting-threat-of-lawsuit.

[4] Form Energy Partners with Xcel Energy on Two Multi-day Energy Storage Projects, BusinessWire (Jan. 26, 2023), https://www.businesswire.com/news/home/20230126005202/en/Form-Energy-Partners-with-Xcel-Energy-on-Two-Multi-day-Energy-Storage-Projects

[5] See Amit Katwala, The Spiralling Environmental Cost of Our Lithium Battery Addiction, Wired UK (May 8, 2018), https://www.wired.co.uk/article/lithium-batteries-environment-impact/. See also The Daily, The Global Race to Mine the Metal of the Future, New York Times (Mar. 18, 2022), https://www.nytimes.com/2022/03/18/podcasts/the-daily/cobalt-climate-change.html.

[6] https://formenergy.com/technology/battery-technology/ (last visited Apr. 6, 2023)

[7] Petition, Long-Duration Energy Storage System Pilot Project at Sherco, page 4, Minnesota PUC (Mar. 6, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={8043C886-0000-CC18-A0DF-1A2C7EA08FA1}&documentTitle=20233-193670-01

[8] Notice of Comment Period, Minnesota PUC (Mar. 21, 2023), https://www.edockets.state.mn.us/edockets/searchDocuments.do?method=showPoup&documentId={90760487-0000-C415-89F7-FDE36D038B2C}&documentTitle=20233-194113-01


Generate a JLST Blog Post: In the Absence of Regulation, Generative AI May Be Reined in Through the Courts

Ted Mathiowetz, MJLST Staffer

In the space of a year, artificial intelligence (AI) has grabbed hold of the contemporary conversation about technology and calls for increased regulation. With ChatGPT's release in late November 2022, as well as the release of various art generation tools earlier in the year, the conversation surrounding tech regulation quickly centered on AI. In the wake of growing Congressional focus on AI, the White House proposed a blueprint for a preliminary AI Bill of Rights as fears over unregulated advances in the technology have grown.[1] The debate has raged over the potential efficacy of this Bill of Rights and whether it could be enacted in time to rein in AI development.[2] But while Washington weighs whether the current regulatory framework will effectively set some ground rules, the matter of AI has already begun to be litigated.[3]

Fear of AI's power has been mounting in numerous sectors as ChatGPT has proven its ability to pass exams such as the Multistate Bar Exam,[4] the US Medical Licensing Exam, and more.[5] Fears over AI's capabilities and potential advancements are not confined to academia, either. The legal industry is already circling the wagons to prevent AI lawyers from representing would-be clients in court.[6] Edelson, a law firm based in Chicago, filed a class action complaint in California state court alleging that DoNotPay, an AI service that markets itself as "the world's first robot lawyer," unlawfully provides a range of legal services.[7] The complaint alleges that DoNotPay is engaging in unlawful business practices by "holding itself out to be an attorney"[8] and "engaging in the unlawful practice of law by selling legal services… when it was not licensed to practice law."[9]

Additional litigation has been filed against the makers of AI art generators, alleging copyright violations.[10] The plaintiffs argue that a swath of AI firms violated the Digital Millennium Copyright Act (DMCA) in constructing their AI models by using software that copied millions of images as references for user-requested output, without compensating those whose images were copied.[11] Notably, both of these suits are class actions[12] and may serve as a strong blueprint for how wary parties can rein in AI through the court system.

Faridian v. DONOTPAY, Inc. — The Licensing Case

AI is here to stay for the legal industry, for better or worse.[13] But where some have sounded the alarm for years that AI will replace lawyers altogether,[14] the truth is likely to be quite different, with AI becoming a tool that helps lawyers work more efficiently.[15] There are nonetheless existential threats to the industry, as seen in the Faridian case, in which DoNotPay allowed people to write wills, contracts, and more without the help of a trained legal professional. This has led to shoddy AI-generated work, raising concern that AI legal technology will lead to more troublesome legal problems down the line for its users.[16]

The AI lawyer revolution may not be around much longer. In addition to the Faridian case, in which DoNotPay is being sued over its robot lawyer's mainly transactional work, the company has also run into problems trying to litigate. DoNotPay tried to bring its AI attorney into court to dispute traffic tickets, but was later "forced" to withdraw the technology's help after "multiple state bar associations [threatened]" to sue, and the company was cautioned that the move could mean prison time for its CEO, Joshua Browder.[17]

Given that most states require bar applicants to 1) complete a Juris Doctor program at an accredited institution, 2) pass the bar exam, and 3) pass a moral character evaluation in order to practice law, robot lawyers are unlikely to see a courtroom for some time, if ever. Instead, there may be a pro se revolution of sorts, in which litigants aid themselves with AI legal services outside the courtroom.[18] For the most part, though, the legal field will likely incorporate AI into its repertoire of technology rather than be replaced by it. Nevertheless, the Faridian case, depending on its outcome, will likely chart a path for other occupations with extensive licensing requirements that are endangered by AI advancement to litigate.

Sarah Andersen et al., v. Stability AI Ltd. — The Copyright Case

For occupations that lack the legal field's barriers to entry, there is another way forward in the courts to stem the tide of AI in the absence of regulation. In the Andersen case, a class of artists has brought suit against various AI art generation companies for infringing their copyrighted artwork by using it to create the reference framework for generated images.[19] The function of the generative AI is relatively straightforward. For example, if I were to log on to an AI art generator and type "Generate Lionel Messi in the style of Vincent Van Gogh," it would produce an image of Lionel Messi in the style of Van Gogh's "Self-Portrait with a Bandaged Ear." There is no copyright on Van Gogh's artwork, but the AI accesses all kinds of copyrighted artwork in the style of Van Gogh, as well as copyrighted images of Lionel Messi, as reference points for the generated image. AI image services have thus created a multitude of legal issues for their parent companies, including claims of direct copyright infringement for storing copies of the works in building out the system, vicarious copyright infringement when consumers generate artwork in the style of a given artist, and DMCA violations for not properly attributing existing work, among other claims.[20]

This case is being closely watched and is already hotly debated, as a ruling against AI could lead to claims against other generative AI, such as ChatGPT, for not properly attributing or paying for the material used to build it.[21] Defendants have claimed that the use of copyrighted material constitutes fair use, but these claims have not yet been fully litigated, so we will have to wait for a decision on that front.[22] It is clear that as quickly as generative AI seemed to take hold of the world, litigation has ramped up, calling its future into question. Other governments are also growing increasingly wary of the technology: Italy has already banned ChatGPT, and Germany is considering doing so, citing "data security concerns."[23] It remains to be seen how the United States will deal with this new technology in terms of regulation or an outright ban, but it is clear that the current battleground is in the courts.

Notes

[1] See Blueprint for an AI Bill of Rights, The White House (Oct. 5, 2022), https://www.whitehouse.gov/ostp/ai-bill-of-rights/; Pranshu Verma, The AI ‘Gold Rush’ is Here. What will it Bring? Wash. Post (Jan. 20, 2023), https://www.washingtonpost.com/technology/2023/01/07/ai-2023-predictions/.

[2] See Luke Hughest, Is an AI Bill of Rights Enough?, TechRadar (Dec. 10, 2022), https://www.techradar.com/features/is-an-ai-bill-of-rights-enough; see also Ashley Gold, AI Rockets ahead in Vacuum of U.S. Regulation, Axios (Jan. 30, 2023), https://www.axios.com/2023/01/30/ai-chatgpt-regulation-laws.

[3] Gold, supra note 2.

[4] Debra Cassens Weiss, Latest Version of ChatGPT Aces Bar Exam with Score nearing 90th Percentile, ABA J. (Mar. 16, 2023), https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile.

[5] See e.g., Lakshmi Varanasi, OpenAI just announced GPT-4, an Updated Chatbot that can pass everything from a Bar Exam to AP Biology. Here’s a list of Difficult Exams both AI Versions have passed., Bus. Insider (Mar. 21, 2023), https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1.

[6] Stephanie Stacey, ‘Robot Lawyer’ DoNotPay is being Sued by a Law Firm because it ‘does not have a Law Degree’, Bus. Insider (Mar. 12, 2023), https://www.businessinsider.com/robot-lawyer-ai-donotpay-sued-practicing-law-without-a-license-2023-3.

[7] Sara Merken, Lawsuit Pits Class Action Firm against ‘Robot Lawyer’ DoNotPay, Reuters (Mar. 9, 2023), https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/.

[8] Complaint at 2, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[9] Id. at 10.

[10] Riddhi Setty, First AI Art Generator Lawsuits Threaten Future of Emerging Tech, Bloomberg L. (Jan. 20, 2023), https://news.bloomberglaw.com/ip-law/first-ai-art-generator-lawsuits-threaten-future-of-emerging-tech.

[11] Complaint at 1, 13, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[12] Id. at 12; Complaint at 1, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[13] See e.g., Chris Stokel-Walker, Generative AI is Coming for the Lawyers, Wired (Feb. 21, 2023), https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/.

[14] Dan Mangan, Lawyers could be the Next Profession to be Replaced by Computers, CNBC (Feb. 17, 2017), https://www.cnbc.com/2017/02/17/lawyers-could-be-replaced-by-artificial-intelligence.html.

[15] Stokel-Walker, supra note 13.

[16] Complaint at 7, Jonathan Faridian v. DONOTPAY, Inc., Docket No. CGC-23-604987 (Cal. Super. Ct. 2023).

[17] Debra Cassens Weiss, Traffic Court Defendants lose their ‘Robot Lawyer’, ABA J. (Jan. 26, 2023), https://www.abajournal.com/news/article/traffic-court-defendants-lose-their-robot-lawyer#:~:text=Joshua%20Browder%2C%20a%202017%20ABA,motorists%20contest%20their%20traffic%20tickets..

[18] See Justin Snyder, RoboCourt: How Artificial Intelligence can help Pro Se Litigants and Create a “Fairer” Judiciary, 10 Ind. J.L. & Soc. Equality 200 (2022).

[19] See Complaint, Sarah Andersen et al., v. Stability AI Ltd., et al., Docket No. 3:23-cv-00201 (N.D. Cal. 2023).

[20] Id. at 10–12.

[21] See e.g., Dr. Lance B. Eliot, Legal Doomsday for Generative AI ChatGPT if Caught Plagiarizing or Infringing, warns AI Ethics and AI Law, Forbes (Feb. 26, 2023), https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=790aecab122b.

[22] Ron. N. Dreben, Generative Artificial Intelligence and Copyright Current Issues, Morgan Lewis (Mar. 23, 2023), https://www.morganlewis.com/pubs/2023/03/generative-artificial-intelligence-and-copyright-current-issues.

[23] Nick Vivarelli, Italy’s Ban on ChatGPT Sparks Controversy as Local Industry Spars with Silicon Valley on other Matters, Yahoo! (Apr. 3, 2023), https://www.yahoo.com/entertainment/italy-ban-chatgpt-sparks-controversy-111415503.html; Adam Rowe, Germany might Block ChatGPT over Data Security Concerns, Tech.Co (Apr. 3, 2023), https://tech.co/news/germany-chatgpt-data-security.


Perhaps Big Tech Regulation Belongs on Congress's For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a Congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok's parent company, ByteDance, sold its share of the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are reportedly concerned that the Chinese government could manipulate users' experience on the platform or threaten the security of the data of its more than 150 million users in the United States.[2] Despite Chew's testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference and routing content recommendations for U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

As for TikTok's future in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023 with information on "when TikTok will be banned," claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] To cut off the platform's access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation "requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern."[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing that effort and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, "To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can't be addressed in narrower ways. The government hasn't demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly."[10]

Despite what Congress may want the public to think, it has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing stops U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the Act's main purpose is to limit anticompetitive conduct by large technology companies, it includes several provisions protecting the personal data of users of defined "gatekeeper" firms. Under the Act, a gatekeeper is a company that provides services — such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services — that are gateways for businesses to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertising services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data, without their explicit consent.[13]

The penalties associated with violations of the Act give it serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper's total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement of specific articles at some point in the eight preceding years.[14] For any company, not just gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles of the Act. Finally, to compel any company to comply with specific Commission decisions and other articles of the regulation, the Commission may impose periodic penalty payments of up to 5% of average daily worldwide turnover in the preceding year, per day.[15]
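To put these percentages in concrete terms, consider a hypothetical gatekeeper with €80 billion in worldwide turnover in the preceding year (an illustrative figure, not drawn from any actual enforcement action):

```latex
\begin{align*}
\text{First infringement cap:}  &\quad 0.10 \times \text{€}80\text{B} = \text{€}8\text{B} \\
\text{Repeat infringement cap:} &\quad 0.20 \times \text{€}80\text{B} = \text{€}16\text{B} \\
\text{Periodic penalty cap:}    &\quad 0.05 \times \frac{\text{€}80\text{B}}{365\ \text{days}} \approx \text{€}11\text{M per day}
\end{align*}
```

Even the daily periodic penalty alone would exceed the annual revenue of many mid-sized companies, which is what gives the Act its deterrent force.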

If the U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about preventing the spread of misinformation on the platform, and truly believe, as Representative Gus Bilirakis claims to, that it is "literally leading to death" and that "[w]e must save our children from big tech companies" who allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies "to collect or provide data from abroad to the Chinese government in a way that violated local laws…"[17] During his testimony, Chew also argued that TikTok is no different from other social media giants and has even sought to put stronger safeguards in place than its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id., Art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.


Taking Off: How the FAA Reauthorization Bill Could Keep Commercial Flights Grounded

James Challou, MJLST Staffer

The last year has been one that the airline industry is eager to forget. Not only did a record number of flight delays and cancellations occur, but the Federal Aviation Administration (FAA) suffered an extremely rare complete system outage and Southwest dealt with a holiday travel meltdown. These incidents, coupled with recent near collisions on runways, have drawn increased scrutiny from lawmakers in Congress as they face a September 30th deadline this year to pass the Federal Aviation Administration Reauthorization Act. And while the reauthorization bill is a hotly debated topic, lawmakers and industry professionals all agree that a failure to meet the deadline could spell disaster.

The need for reauthorization arises from the structure and funding system of the FAA. Reauthorization is a partial misnomer. Though the airline industry was deregulated in 1978, the practice of FAA reauthorization originated with the Airport and Airway Revenue Act of 1970 which created the Airport and Airway Trust Fund (Trust Fund) that is used to finance FAA investments. The authority to collect taxes and to spend from the Trust Fund must be periodically reauthorized to meet agency and consumer needs. Currently, the Trust Fund provides funds for four major FAA accounts: Operations, Facilities & Equipment (F&E), Research, Engineering and Development (RE&D), and Grants-in-Aid for Airports. If the FAA’s authorization expired without an extension, then the agency would be unable to spend revenues allocated from the Trust Fund. The flip side of the unique reauthorization process is that it offers a regular opportunity for Congress to hold the FAA accountable for unfulfilled mandates, to respond to new problems in air travel, and to advocate for stronger consumer protections because enacted changes in reauthorization acts only span a set time period.

On top of the recent spate of industry complications and near disasters, Congress must sift through a myriad of other concerns and issues that pervade the airline industry for the potential upcoming reauthorization. Consumer protection has become an increasingly pressing and hot-button issue as the deluge of canceled flights in the past year left many consumers disgruntled by the treatment and compensation they received. In fact, the Consumer Federation of America and several other consumer and passengers’ rights groups recently called upon the House Transportation Committee and the Senate Commerce Committee to prioritize consumer protections. Their requests include: requiring compensation when consumers’ flights are delayed or canceled; holding airlines accountable for publishing unrealistic flight schedules; ending junk fee practices in air travel, including prohibiting fees for family seating and other such services, and requiring all-in pricing; ending federal preemption of airline regulation and allowing state attorneys general and individuals to hold airlines accountable; encouraging stronger DOT enforcement of passenger protections; and prioritizing consumer voices and experiences.

However, not all are sold on enhancing consumer protections via the reauthorization process. Senator Ted Cruz, the top Republican lawmaker on the Commerce, Science, and Transportation Committee has expressed opposition to increased agency and government intervention in the airline industry, citing free market and regulatory overreach concerns. Instead, Cruz and his allies have suggested that the FAA’s technology is outdated, and their sole focus should be on modernizing it.

Indeed, it appears that in the wake of the FAA system outage most interested parties and lawmakers agree that the aging FAA technology needs updating. While at first glance one might think this provides common ground, the opinions on how to update the FAA’s technology are wide-ranging. For example, while some have flagged IT infrastructure and aviation safety systems as the FAA technology to target in order to augment the FAA’s cybersecurity capacity, others are more concerned with providing the agency direction on the status of new airspace inhabitants such as drones and air taxis to facilitate entrants into the market. Despite cross-party agreement that the FAA’s technology needs at least a baseline update, there is little direction on what that means in practice.

Another urgent and seemingly undisputed issue that the reauthorization effort faces is FAA staffing. The FAA’s workforce has severely diminished in the past decade. Air traffic controllers, for example, number 1,000 fewer than a decade ago, and more than 10% are eligible to retire. Moreover, a shortage of technical operations employees has grown so severe that union officials have dubbed it to be approaching crisis levels. As a result, most lawmakers agree that expanding the FAA’s workforce is paramount.

However, despite the dearth of air traffic controllers and technical operations employees, this proposition has encountered roadblocks as well. Some lawmakers view expanded hiring as an opportunity to increase diversity within the ranks of the FAA and have offered proposals to that end. Currently, only 2.6% of aviation mechanics are women, while 94% of aircraft pilots are male and 93% are White. Lawmakers have made several proposals intended to rectify this disparity, centering on reducing the cost of entry into FAA professions. However, Republicans have largely rebuffed these efforts, criticizing them as distractions from the chief concern of safety. Additionally, worker groups continue to air concerns about displacing qualified U.S. pilot candidates and undercutting current pilot pay. Any such modifications to the FAA reauthorization bill will require bipartisan support.

Finally, a lingering battle between Democrats and Republicans over the confirmation of President Biden’s nominee to lead the agency has hampered efforts to forge a bipartisan reauthorization bill. Cruz, again spearheading the Republican contingent, has decried Biden’s nominee for possessing no aviation experience and being overly partisan. Proponents, however, have pointed out that only two of the last five FAA administrators had any aviation experience and have lauded the nominee’s credentials and experience in the military. The surprisingly acrid fight bodes ominously for a reauthorization bill that must be bipartisan and is subject to serious time constraints.

The FAA reauthorization process provides valuable insight into how Congress decides agency directives. However, while safety and technology concerns remain the joint focal point of Congress’ intent for the reauthorization bill, in practice there seems to be little common ground between lawmakers. With the September 30th deadline looming, it is increasingly important that lawmakers cooperate to hammer out a reauthorization bill. Failure to do so would severely cripple the FAA and the airline industry in general.


The Future of Neurotechnology: Brain Healing or Brain Hacking?

Gordon Unzen, MJLST Staffer

Brain control and mindreading are no longer ideas confined to the realm of science fiction—such possibilities are now the focus of science in the field of neurotechnology. At the forefront of the neurotechnology revolution is Neuralink, a medical device company owned by Elon Musk. Musk envisions that his device will allow communication with a computer via the brain, restore mobility to the paralyzed and sight to the blind, create mechanisms by which memories can be saved and replayed, give rise to abilities like telepathy, and even transform humans into cyborgs to combat sentient artificial intelligence (AI) machines.[1]

Both theoretical and current applications of brain-interfacing devices, however, raise concerns about infringements upon privacy and freedom of thought, with the technology providing intimate information ripe for exploitation by governments and private companies.[2] Now is the time to consider how to address the ethical issues raised by neurotechnology so that people may responsibly enjoy its benefits.

What is Neurotechnology?

Neurotechnology describes the use of technology to understand the brain and its processes, with goals to control, repair, or improve brain functioning.[3] Neurotechnology research uses techniques that record brain activity such as functional magnetic resonance imaging (fMRI), and that stimulate the brain such as transcranial electrical stimulation (tES).[4] Both research practices and neurotechnological devices can be categorized as invasive, wherein electrodes are surgically implanted in the brain, or non-invasive, which do not require surgery.[5] Neurotechnology research is still in its infancy but development rates will likely continue accelerating with the use of increasingly advanced AI to help make sense of the data.[6]

Work in neurotechnology has already led to the proliferation of applications impacting fields from medicine to policing. Bioresorbable electronic medication speeds up nerve regeneration, deep brain stimulators function as brain pacemakers targeting symptoms of diseases like Parkinson’s, and neurofeedback visualizes brain activity for the real-time treatment of mental illnesses like depression.[7] Recently, a neurotechnological device that stimulates the spinal cord allowed a stroke patient to regain control of her arm.[8]  Electroencephalogram (EEG) headsets are used by gamers as a video game controller and by transportation services to track when a truck driver is losing focus.[9] In China, the government uses caps to scan employees’ brainwaves for signs of anxiety, rage, or fatigue.[10] “Brain-fingerprinting” technology, which analyzes whether a subject recognizes a given stimulus, has been used by India’s police since 2003 to ‘interrogate’ a suspect’s brain, although there are questions regarding the scientific validity of the practice.[11]

Current research enterprises in neurotechnology aim to push the possibilities much further. Mark Zuckerberg’s Meta financed invasive neurotechnology research using an algorithm that decoded subjects’ answers to simple questions from brain activity with 61% accuracy.[12] The long-term goal is to allow everyone to control their digital devices through thought alone.[13] Musk similarly aims to begin human trials for Neuralink devices designed to help paralyzed individuals communicate without the need for typing, and he hopes this work will eventually allow Neuralink to fully restore their mobility.[14] However, Musk has hit a roadblock in failing to acquire FDA approval for human testing, despite claiming that Neuralink devices are safe enough that he would consider using them on his children.[15] Others expect that neurofeedback will eventually see mainstream deployment through devices akin to a fitness tracker, allowing people to constantly monitor their brain health metrics.[16]

Ethical Concerns and Neurorights

Despite the possible medical and societal benefits of neurotechnology, it would be dangerous to ignore the ethical red flags raised by devices that can observe and impose on brain functioning. In a world of increasing surveillance, the last bastion of privacy and freedom exists in the brain. This sanctuary is lost when even the brain is subject to data collection practices. Neurotechnology may expose people to dystopian thought policing and hijacking, but more subtly, could lead to widespread adverse psychological consequences as people live in constant fear of their thoughts being made public.[17]

Particularly worrisome is how current government and business practices inform the likely near-future use of data collected by neurotechnology. In law enforcement contexts such as interrogations, neurotechnology could allow the government to cause people to self-incriminate in violation of the Fifth Amendment. Private companies that collect brain data may be required to turn it over to governments, analogous to the use of Fitbit data as evidence in court.[18] If the data do not go to the government, companies may instead sell them to advertisers.[19] Even positive implementations can be taken too far. EEG headsets that allow companies to track the brain activity of transportation employees may be socially desirable, but the widespread monitoring of all employees for productivity is a plausible and sinister next step.

In light of these concerns, ethicist and lawyer Nita Farahany argues for updating human rights law to protect cognitive privacy and liberty.[20] Farahany describes a right of self-determination regarding neurotechnology to secure freedom from interference, to access the technology if desired, and to change one’s own brain by choice.[21] This libertarian perspective acknowledges the benefits of neurotechnology, for which many may be willing to sacrifice privacy, while also ensuring that people have an opportunity to say no to its imposition. Others take a more paternalistic approach, questioning whether further regulation is needed to limit possible neurotechnology applications. Sigal Samuel notes that cognitive-enhancing tools may create competition that requires people to either use the technology or get left behind.[22] Decisions to engage with neurotechnology thus will not be made with the freedom Farahany imagines.

Conclusion

Neurotechnology holds great promise for augmenting the human experience. The technology will likely play an increasingly significant role in treating physical disabilities and mental illnesses. In the near future, we will see the continued integration of thought as a method to control technology. We may also gain access to devices offering new cognitive abilities from better memory to telepathy. However, using this technology will require people to give up extremely private information about their brain functions to governments and companies. Regulation, whether it takes the form of a revamped notion of human rights or paternalistic lawmaking limiting the technology, is required to navigate the ethical issues raised by neurotechnology. Now is the time to act to protect privacy and liberty.

[1] Rachel Levy & Marisa Taylor, U.S. Regulators Rejected Elon Musk’s Bid to Test Brain Chips in Humans, Citing Safety Risks, Reuters (Mar. 2, 2023), https://www.reuters.com/investigates/special-report/neuralink-musk-fda/.

[2] Sigal Samuel, Your Brain May Not be Private Much Longer, Vox (Mar. 17, 2023), https://www.vox.com/future-perfect/2023/3/17/23638325/neurotechnology-ethics-neurofeedback-brain-stimulation-nita-farahany.

[3] Neurotechnology, How to Reveal the Secrets of the Human Brain?, Iberdrola, https://www.iberdrola.com/innovation/neurotechnology#:~:text=Neurotechnology%20uses%20different%20techniques%20to,implantation%20of%20electrodes%20through%20surgery (last accessed Mar. 19, 2023).

[4] Id.

[5] Id.

[6] Margaretta Colangelo, How AI Is Advancing NeuroTech, Forbes (Feb. 12, 2020), https://www.forbes.com/sites/cognitiveworld/2020/02/12/how-ai-is-advancing-neurotech/?sh=277472010ab5.

[7] Advances in Neurotechnology Poised to Impact Life and Health Insurance, RGA (July 19, 2022), https://www.rgare.com/knowledge-center/media/research/advances-in-neurotechnology-poised-to-impact-life-and-health-insurance.

[8] Stroke Patient Regains Arm Control After Nine Years Using New Neurotechnology, WioNews (Feb. 22, 2023), https://www.wionews.com/trending/stroke-patients-can-regain-arm-control-using-new-neurotechnology-says-research-564285.

[9] Camilla Cavendish, Humanity is Sleepwalking into a Neurotech Disaster, Financial Times (Mar. 3, 2023), https://www.ft.com/content/e30d7c75-90a3-4980-ac71-61520504753b.

[10] Samuel, supra note 2.

[11] Id.

[12] Sigal Samuel, Facebook is Building Tech to Read your Mind. The Ethical Implications are Staggering, Vox (Aug. 5, 2019), https://www.vox.com/future-perfect/2019/8/5/20750259/facebook-ai-mind-reading-brain-computer-interface.

[13] Id.

[14] Levy & Taylor, supra note 1.

[15] Id.

[16] Manuela López Restrepo, Neurotech Could Connect Our Brains to Computers. What Could Go Wrong, Right?, NPR (Mar. 14, 2023), https://www.npr.org/2023/03/14/1163494707/neurotechnology-privacy-data-tracking-nita-farahany-battle-for-brain-book.

[17] Vanessa Bates Ramirez, Could Brain-Computer Interfaces Lead to ‘Mind Control for Good’?, Singularity Hub (Mar. 16, 2023), https://singularityhub.com/2023/03/16/mind-control-for-good-the-future-of-brain-computer-interfaces/.

[18] Restrepo, supra note 16.

[19] Samuel, supra note 12.

[20] Samuel, supra note 2.

[21] Id.

[22] Id.


Reckless, Wanton or Willful: Is Firing a “Cold Gun” Criminally Negligent?

Ben Lauter, MJLST Staffer

On October 21st, 2021, Alec Baldwin shot and killed Halyna Hutchins. This tragedy was the result of Baldwin’s belief that the gun in his hand was a “cold gun.” In movie making, a cold gun is a gun that contains no live ammunition, or ammunition capable of endangering life. Baldwin believed that he held a cold gun because that was what he was told when he was handed the weapon on set. He was given this assurance by the film’s first assistant director, not the film’s armorer. After being handed the gun, Baldwin did not take any additional steps to confirm that the gun was indeed cold. Moments later, Baldwin fired the gun, releasing a round that struck Hutchins and injured another.

After this tragic event the State of New Mexico decided to bring the criminal charge of involuntary manslaughter against Baldwin and the film’s armorer, Hannah Gutierrez-Reed (the assistant director took a plea bargain and accepted probation). Both defendants are being charged with two different forms of involuntary manslaughter, one with a firearm enhancement and the other without the enhancement. This blog post will specifically examine the likely outcome for Alec Baldwin under New Mexico’s involuntary manslaughter statutes. As for Gutierrez-Reed, the suspicion is that she could be convicted on the charges brought against her given the nature of her career and alleged expertise.

The first step is to interpret the manslaughter statute. The provision dealing with involuntary manslaughter is Chapter 30, Article 2, Section 30-2-3. It codifies what is criminal when a homicide is unintentional. The statute sets different criteria for conviction depending on the conditions under which the homicide took place; which criteria apply turns on whether the death occurred during a lawful or an unlawful act.

The difference between the two is whether the defendant was doing something legal or illegal at the time they unintentionally killed someone. For example, if someone was robbing a bank and unintentionally killed someone during that time, the robber would be, at a minimum, charged with unlawful manslaughter. However, if the homicide happens in an environment where the individual is not doing anything illegal, such as driving the speed limit down the road, the charge would be lawful manslaughter. Determining whether it is lawful or unlawful manslaughter is a critical step because it determines the standard by which the defendant will be held.

In unlawful manslaughter cases, the standard the prosecution must meet is simply proving that the defendant intended to carry out the unlawful act – it would not matter whether the homicide was intentional at all. In a lawful manslaughter case, the standard switches to criminal negligence. Criminal negligence requires a demonstration of acting without due caution and circumspection, and/or conduct that is reckless, wanton, or willful. This standard is harder for the prosecution to prove. Essentially, a criminal negligence standard requires a conscious disregard of safety, not just a failure of reasonable care. The prosecution would have to show not merely that Baldwin and/or Gutierrez-Reed acted with unreasonable care, which is easier, but that they consciously disregarded the safety of others when they handed over, and when they used, the “cold gun.”

Applied to the facts of New Mexico v. Baldwin, it is likely that Baldwin will walk away without a guilty conviction, because his actions do not meet the standard for any form of manslaughter. First, Baldwin’s conduct does not trigger the strict liability threshold attached to an unlawful manslaughter charge because he was not engaged in any illegal act at the time of the homicide. He was on the set of the film he was acting in, taking actions necessary to make that film. Thus the State will attempt to convict Baldwin by arguing he committed involuntary manslaughter during a lawful act, which carries a criminal negligence standard. Criminal negligence, the term of art used in the statute, is frankly a misleading and confusing label, since the common law interpretation of the statute is not a negligence standard at all: criminal negligence here means a reckless, wanton, or willful act, and all three necessitate some kind of conscious action. Applied to the facts of the matter, there do not appear to be any details indicating that Baldwin reflected on the risk and fired the gun anyway. The record seemingly establishes that Baldwin was under the impression that the gun was cold and that he was simply shooting takes for the day’s scenes. It seems unlikely that a decision-maker could conclude that Baldwin’s actions ever amounted to a conscious disregard of safety; an unconscious lapse perhaps, but mere negligence is not enough to support a conviction in a lawful act manslaughter case. Additional support for that conclusion lies in the fact that Baldwin is an actor, who by industry practice has no reason to believe he would be handed a live gun after being told it is cold. An actor is entitled to rely on the professionals on set.

One fact that may add a wrinkle is that on the day in question, Baldwin was handed the gun by the assistant director rather than by the armorer, Gutierrez-Reed. This departure from protocol might arguably obligate an actor to take additional steps to ensure that the gun was indeed cold, but little case law suggests such a requirement. On an overall reading of the facts, Baldwin will likely be acquitted of the charge because he did not act with criminal negligence when he fired the gun that killed Halyna Hutchins.


Patent Venue and the Western District of Texas: Will Randomly Assigning Judges Really Change Anything?

Nina Elder, MJLST Staffer

According to the 2023 Patent Litigation Report Lex Machina released last month, Judge Alan Albright, of the Western District of Texas, heard more patent cases than any other judge in the nation. This is largely because Judge Albright has historically heard nearly all patent cases filed in his district—a district which has maintained its position as the most popular patent venue for several years. Last July, to address concerns about Judge Albright’s monopoly over patent cases, the Western District of Texas implemented a new rule requiring that judges be randomly assigned to patent cases. Some expected that patent filings in the district would “fall off a cliff” after this change, but the Lex Machina report showed that so far there hasn’t been a major decrease in the number of patent cases filed in the district. However, the question remains: will randomization have a significant effect on the distribution of patent cases in the long term?

Why Texas?

Until relatively recently, the Western District of Texas was not a particularly popular patent venue. Judge Albright’s appointment in 2018 changed that. Before becoming a judge, Albright practiced as a patent litigator for decades. He enjoys patent cases and on multiple occasions encouraged parties to file them in his court. And his efforts succeeded—the Western District of Texas had a meteoric rise in popularity after Albright was appointed, and only two years after he took the bench it went from receiving only 2.5% of patent cases filed nationwide to around 22%.

Plaintiffs have flocked to the Western District of Texas to take advantage of Judge Albright’s plaintiff-friendly practices. Plaintiffs prefer his fast-moving schedules because they drive settlement negotiations and limit the time defendants have to develop their case. His patent-specific standing orders provide predictability, and his years of patent experience allow for efficient resolution of issues. Albright’s procedures also make it harder for defendants to initiate inter partes review to invalidate plaintiffs’ patents before the Patent Trial and Appeal Board, which has been called a patent death squad.

Because of the way cases are distributed in the Western District of Texas, plaintiffs can almost guarantee they will be assigned to Judge Albright if they file in the Waco division, where he is the sole judge. The district is organized into nine divisions, most with one or two judges. Federal district courts are not required to randomly assign cases and, barring unique circumstances, a case filed in a Western Texas division with only one judge will be assigned to that judge. This ability to choose provides plaintiffs with certainty as to the judge that will preside over their case – something not available in most districts. As a result, nearly all patent cases in the Western District of Texas have been handled by Judge Albright. Albright also transfers cases infrequently, meaning it is unlikely a given case will be transferred to a more defendant-friendly forum.

New Rule Requires Random Assignment

Concerns have been expressed about the monopoly Albright has on patent cases. General concerns revolve around judge shopping, which may undermine fairness and public trust in the judicial system and raises the worry that cases may be won on procedural advantage rather than the merits. In Judge Albright’s case, there is particular unease about non-practicing entities (NPEs). NPEs, or patent trolls as they are often called, generate revenue by suing for infringement, often using abusive litigation tactics. There have been concerns that Judge Albright’s practices benefit patent trolls, given that after he took the bench more than 70% of new patent cases in the Western District of Texas were brought by NPEs.

In response to this issue, in November 2021 several members of the Senate Judiciary Committee’s intellectual property subcommittee wrote a letter to Chief Justice John Roberts and the Judicial Conference of the Administrative Office of the U.S. Courts. While they did not name Albright, they alluded to him by noting “unseemly and inappropriate conduct in one district.” They also sent a letter to the U.S. Patent and Trademark Office expressing concern that Judge Albright repeatedly ignored binding case law and abused his discretion by denying transfer motions. The Judicial Conference director, Judge Roslynn R. Mauskopf, said the office would conduct a study and noted that random case assignment safeguards judicial autonomy and prevents judge shopping. Chief Justice Roberts addressed the issue in his annual report and said that patent venue was one of the top issues facing the judiciary.

As a result, last July the Chief Judge of the Western District of Texas, Orlando Garcia, instituted a random assignment of patent cases filed in Waco. Under the new rule, patent cases filed in Waco are no longer automatically assigned to Judge Albright, but instead are randomly distributed to one of the 13 judges in the district.

Impacts of the New Rule

Initial reports suggested there was a decrease in patent case filings in the Western District of Texas after the new rule, but more recent Lex Machina data show that there was limited change. Though the number of patent cases on Judge Albright’s docket did decrease, the drop was not as great as some expected, and he still received around 50% of all patent cases filed in the district. However, this is largely because Albright is still being assigned any newly filed cases that relate to those already on his docket. Though randomization hasn’t significantly shrunk Albright’s share of patent cases yet, the number of cases assigned to him should decrease over time. What remains to be seen, however, is whether there will be an overall decrease in patent cases filed in the Western District of Texas.

What Will Happen in the Future?

It is unclear how this new way of assigning cases in the Western District of Texas will impact the distribution of patent cases. Uncertainty about the behavior of the other judges in the district will likely cause a decrease in filings. There are 12 “new” judges who can preside over patent cases in the district, and only five have significant intellectual property experience. Until it is clearer how the other judges handle patent cases, litigants may go elsewhere. However, it is possible that the other judges will follow Albright’s lead. Two judges in the district, Kathleen Cardone and David Counts, have already adopted Albright’s patent procedures. It is also possible litigants will simply begin targeting judges with patent experience in the district. The new rule does not require random assignment for all patent cases—only those filed in Waco. Plaintiffs can still select their desired judge, as long as it is not Albright.

Even if the Western District loses its spot at the top, Texas will likely remain a popular patent venue. Before the Western District began its rise, the Eastern District of Texas was a patent litigation epicenter. At least for the near future, the Western District of Texas seems likely to remain one of the most popular patent forums; only time will tell the larger effects of the new rule.


Reining in Big Tech

Shawn Zhang, MJLST Staffer

Introduction

On Tuesday, January 24, 2023, the United States Department of Justice, along with the Attorneys General of eight states, filed a civil antitrust lawsuit against Google for monopolizing multiple digital advertising technology products in violation of Sections 1 and 2 of the Sherman Act.

Background

The Sherman Act (the Act) is the first antitrust statute of the U.S., passed in 1890 as a “comprehensive charter of economic liberty aimed at preserving free and unfettered competition as the rule of trade.” The alleged violations are for Sections 1 and 2 of the Act.

Section 1 is broad and sweeping in scope. It declares restraints of trade involving any “contract, combination, or conspiracy” to be illegal. A key feature of Section 1 is that contracts, combinations, and conspiracies are all concerted actions that require more than one party; Section 1 therefore cannot apply to unilateral conduct. An example of such concerted action is horizontal price fixing: multiple competitors in the same market agree with each other to set the same price for a given product. The statute sets the penalty for violating the Act at a maximum fine of $100 million for corporations and/or a maximum imprisonment of 10 years.

Section 2, unlike Section 1, prohibits monopolization, and its language (“every person”) indicates that it does not require concerted action; even a single entity attempting to monopolize can be penalized. Concerted monopolization and attempts to monopolize are covered as well by the language “or combine or conspire with any other person or persons.” The penalties for violating either section can be severe, including substantial fines and/or imprisonment. Most enforcement actions are civil; although the Department of Justice may prosecute individuals and businesses criminally, such prosecutions are limited in practice.

Analysis

Google’s business model is driven primarily by its search engine services, whose purpose is to deliver users the answers they are seeking. Through this search function, Google gains the opportunity to sell advertisements, from which it earns the bulk of its revenue. With its dominance of the search engine industry, Google has obtained dominance in selling advertisements as well.

The complaint alleges that Google monopolizes key digital advertising technologies, collectively referred to as the “ad tech stack,” that website publishers depend on to sell ads. Advertisers rely on this ad tech stack to buy ads and reach potential customers. The complaint also alleges that Google has engaged in a course of anticompetitive and exclusionary conduct over the past 15 years that consists of neutralizing or eliminating ad tech competitors through acquisitions. By doing this, Google has maintained dominance in tools relied on by website publishers and online advertisers. “The Department’s landmark action against Google underscores our commitment to fighting the abuse of market power,” said Associate Attorney General Vanita Gupta. The lawsuit seeks to hold Google accountable for its “longstanding monopolies” in digital advertising technologies that content creators use to sell ads and advertisers use to buy ads on the open internet.

The key contentions to be fought over in this lawsuit include acquiring competitors, forcing adoption of Google’s tools, distorting auction competition, and auction manipulation. The Act seeks to maintain competition in markets and eliminate monopolies; the Department of Justice aims to enforce the spirit of the Act by ending Google’s alleged monopolistic conduct and restoring competition. The agency ultimately seeks both equitable relief on behalf of the American public and treble damages for losses sustained by federal government agencies that overpaid for web display advertising.

In light of developments in antitrust law, a company violates the statute only when it has “engaged in practices that extend beyond competition on the merits.” The plaintiffs must prove that Google’s conduct harms competition, restrains trade, or amounts to monopolization or attempted monopolization. It is difficult to determine whether Google has engaged in such practices, as its conduct could also be characterized as efficient business behavior. But if the Department of Justice wins the case, the result could have significant implications for Google and the rest of the tech industry.

Implications for the Tech Industry

If the Department of Justice succeeds in its lawsuit, Google may face several consequences, including divestiture. Microsoft was found to have violated antitrust laws in the late 1990s, and a district court initially ordered the company broken up, although that order was reversed on appeal and the case ultimately settled. Another possible remedy would be to force Google to allow other search engines to be the default on devices such as phones and tablets, which the DOJ has attempted in the past. “Alphabet Inc.’s Google pays billions of dollars each year to Apple Inc., Samsung Electronics Co. and other telecom giants to illegally maintain its spot as the No. 1 search engine … Google’s contracts form the basis of the DOJ’s landmark antitrust lawsuit, which alleges the company has sought to maintain its online search monopoly in violation of antitrust laws.”

This case could renew scrutiny of other tech giants such as Meta and Amazon. If the Department of Justice succeeds, it is likely to pursue other tech giants as well. A government victory may begin an era of tech reform, making it easier for competitors to enter the market and offering more options for consumers. With more competitors in the market, tech giants may be forced to reduce their prices, which could improve consumer welfare.

On the other hand, a government victory may harm the tech industry. Google and other tech giants are highly efficient businesses that can provide services at lower cost through economies of scale. Forcing them to split up and preventing them from achieving those efficiencies may make their services more expensive. However, efficiency is not a justification for monopolies, which largely bring more harm than benefit to consumers because they can impose unreasonably high prices. A notorious example of price gouging enabled by monopoly power occurred when Martin Shkreli thwarted competition for the drug Daraprim (used to treat toxoplasmosis, including in patients with HIV) and raised its price from $13.50 per pill to $750.00 per pill.

Conclusion

This lawsuit will be watched closely by regulators and tech giants alike, as a successful challenge could embolden regulators to go after other companies. Regulators are actively looking to rein in big tech, as evidenced by the antitrust investigations of the past decades as well as the bill targeting big tech companies currently moving through Congress. The fight between regulators and the tech industry continues, and the courts’ eventual ruling may pave the way for a more competitive economy with greater consumer welfare.

 


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, overflowing emergency rooms were coupled with doctor shortages.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and increased to as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth services has decreased slightly in recent years, it appears to be here to stay. Nowhere has the use of telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three outpatient mental health visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, the shift in how healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth in mental health services is privacy, for several reasons. The primary concern has been that telehealth takes place over the phone or on personal computers, and when personal devices are used, it is nearly impossible to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, allowing for more secure data transmission.[8] While concerns remain, these secure servers have mitigated much of the problem.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth services for mental health is a little more difficult to address. This concern comes from the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] With the increased use of telehealth for mental health services, there has also been an increase in the use of these mental health apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, credit score, etc.[16]

In fact, according to a 2022 study, a popular therapy app, BetterHelp, was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

An example of information that does get shared is the intake questionnaire.[19] Customers must fill out an intake questionnaire on BetterHelp, as on other therapy apps, in order to be matched with a provider.[20] The answers to these questionnaires were specifically found to have been shared by BetterHelp with an analytics company, along with the approximate location and device of the user.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long the therapy sessions are, how long someone spends sending messages on the app, what times someone logs into the app, what times someone sends a message or speaks to their therapists, the approximate location of the user, how often someone opens the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA has issued guidance stating that it will use its enforcement discretion when dealing with mental health apps.[27] This means that if the privacy risk appears low, the FDA will not pursue enforcement against these companies.[28]

Ultimately, if mental telehealth services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps offer telehealth services much like any healthcare provider that is covered by HIPAA. Knowledge that mental health apps share personal data so freely breeds distrust, and those privacy concerns have cost the apps many users’ confidence. In the long run, regulatory oversight would pressure these companies to show that their services can be trusted, potentially increasing their success by rebuilding trust with the public.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, do Mental Wellness Apps Need More Oversight?, Fierce Healthcare (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.