Forensic Evidence

Privacy at Risk: Analyzing DHS AI Surveillance Investments

Noah Miller, MJLST Staffer

The concept of widespread surveillance of public areas monitored by artificial intelligence (“AI”) may sound like it comes right out of a dystopian novel, but key investments by the Department of Homeland Security (“DHS”) could make this a reality. Under the Biden Administration, the U.S. has acted quickly and strategically to adopt AI as a tool to realize national security objectives.[1] In furtherance of President Biden’s executive goals concerning AI, DHS has been making investments in surveillance systems that utilize AI algorithms.

Despite the substantial interest in protecting national security, Patrick Toomey, deputy director of the ACLU National Security Project, has criticized the Biden administration for allowing national security agencies to “police themselves as they increasingly subject people in the United States to powerful new technologies.”[2] Notably, these investments have not been tailored towards high-security locations like airports. Instead, they include surveillance of “soft targets”—high-traffic areas with limited security: “Examples include shopping areas, transit facilities, and open-air tourist attractions.”[3] Currently, surveilling most public areas is infeasible because of the number of people required to review footage; emerging AI algorithms, however, would allow this work to be done automatically. While enhancing security protections at soft targets is a noble and possibly desirable initiative, the potential privacy ramifications of widespread autonomous AI surveillance are extreme. Current Fourth Amendment jurisprudence offers little resistance to this form of surveillance, and DHS has been both developing this surveillance technology itself and outsourcing these projects to private corporations.

To foster innovation to combat threats to soft targets, DHS has created a center called Soft Target Engineering to Neutralize the Threat Reality (“SENTRY”).[4] One of SENTRY’s research areas is the “real-time management of threat detection and mitigation.”[5] One project in this area seeks to create AI algorithms that can detect threats in public and crowded spaces.[6] Once the algorithm detects a threat, the incident would be sent to a human for confirmation.[7] This would be a substantially more efficient form of surveillance than is currently widely available.

Along with the research conducted through SENTRY, DHS has been investing in private companies to develop AI surveillance technologies through the Silicon Valley Innovation Program (“SVIP”).[8] Through the SVIP, DHS has awarded funding to three companies to develop AI surveillance technologies that can detect “anomalous events via video feeds” to improve security at soft targets: Flux Tensor, Lauretta AI, and Analytical AI.[9] First, Flux Tensor currently has a pilot-ready prototype that applies “flexible object detection algorithms” to video feeds to track and pinpoint movements of interest.[10] The technology distinguishes human movements and actions from the environment—e.g., weather, glare, and camera movements.[11] Second, Lauretta AI is adapting its established activity-recognition AI to utilize “multiple data points per subject to minimize false alerts.”[12] The technology periodically generates automated reports of detected incidents, categorized by relative severity.[13] Third, Analytical AI is in the proof-of-concept demo phase with AI algorithms that can autonomously track objects in relation to people within a perimeter.[14] The company has already created algorithms that can screen for prohibited items and “on-person threats” (i.e., weapons).[15] All of these technologies are in early stages, so DHS is unlikely to deploy them in the imminent future.

Assuming these AI algorithms prove effective and come to fruition, current Fourth Amendment protections seem insufficient to guard against rampant AI surveillance of public areas. In Kyllo v. United States, the Court placed an important limit on law enforcement’s use of new technologies, holding that when sense-enhancing technology not in general public use is utilized to obtain information from a constitutionally protected area, the use of that technology constitutes a search.[16] Unlike in Kyllo, where police used thermal imaging to measure temperature levels on various areas of a house, people subject to AI surveillance in public would not be in constitutionally protected areas.[17] Because they would be in public places, they would have no reasonable expectation of privacy in their movements; this form of surveillance therefore likely would not constitute a search under prevailing Fourth Amendment search analysis.[18]

While the scope and accuracy of this new technology are still to be determined, policymakers and agencies need to implement proper safeguards and proceed cautiously. In the best scenario, this technology can keep citizens safe while mitigating the impact on the public’s privacy interests. In the worst scenario, this technology could effectively turn our public spaces into security checkpoints. Regardless of how relevant actors proceed, this new technology would likely result in at least some decline in the public’s privacy interests. Policymakers should not make a Faustian bargain for the sake of maintaining social order.

 

Notes

[1] See generally Joseph R. Biden Jr., Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, The White House (Oct. 24, 2024), https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ (explaining how the executive branch intends to utilize artificial intelligence in relation to national security).

[2] ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections, ACLU (Oct. 24, 2024, 12:00 PM), https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

[3] Jay Stanley, DHS Focus on “Soft Targets” Risks Out-of-Control Surveillance, ACLU (Oct. 24, 2024), https://www.aclu.org/news/privacy-technology/dhs-focus-on-soft-targets-risks-out-of-control-surveillance.

[4] See Overview, SENTRY, https://sentry.northeastern.edu/overview/#VSF.

[5] Real-Time Management of Threat Detection and Mitigation, SENTRY, https://sentry.northeastern.edu/research/real-time-threat-detection-and-mitigation/.

[6] See An Artificial Intelligence-Driven Threat Detection and Real-Time Visualization System in Crowded Places, SENTRY, https://sentry.northeastern.edu/research-project/an-artificial-intelligence-driven-threat-detection-and-real-time-visualization-system-in-crowded-places/.

[7] See id.

[8] See, e.g., SVIP Portfolio and Performers, DHS, https://www.dhs.gov/science-and-technology/svip-portfolio.

[9] Id.

[10] See Securing Soft Targets, DHS, https://www.dhs.gov/science-and-technology/securing-soft-targets.

[11] See pFlux Technology, Flux Tensor, https://fluxtensor.com/technology/.

[12] See Securing Soft Targets, supra note 10.

[13] See Security, Lauretta AI, https://lauretta.io/technologies/security/.

[14] See Securing Soft Targets, supra note 10.

[15] See Technology, Analytical AI, https://www.analyticalai.com/technology.

[16] Kyllo v. United States, 533 U.S. 27, 33 (2001).

[17] Cf. id.

[18] See generally Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring) (explaining the test for whether someone may rely on an expectation of privacy).

 

 


DNA Testing and Death: How Decades-Long Procedural Battles Determine Who Has to Die

Alexa Johnson-Gomez, MJLST Staffer

When individuals convicted of murder claim actual innocence, crime-scene DNA testing has, many times over, been dispositive in proving such innocence. Intuitively, we assume that if someone has been wrongfully convicted, DNA will be the bringer of truth. But what happens when a defendant cannot get their requested DNA testing because the State argues their claim is procedurally defaulted or barred by the statute of limitations?

Reed v. Goertz is a case in the current U.S. Supreme Court term. Petitioner Rodney Reed argues that his due process rights were violated by a refusal to complete DNA testing after he filed post-conviction petitions for relief. While the facts are fairly case-specific and relate to Texas criminal procedure, the Court’s holding in this case could have important implications for when the clock starts to run on petitions for crime-scene DNA testing, as well as for death-row claims of actual innocence more generally.

Back in 1998, a Texas court convicted Rodney Reed of the murder of Stacey Stites; the evidentiary basis for this conviction was solely the presence of his sperm.[1] Reed has maintained his innocence since trial, explaining that his sperm was present because he was having a secret, long-standing affair with Stites.[2] At trial, Reed theorized that the murderer might have been the man Stites was engaged to, who was perhaps retaliating against Stites, a white woman, for having an affair with Reed, a Black man.

In 2014, Reed sought post-conviction DNA testing under Chapter 64 of the Texas Code of Criminal Procedure, a provision that allows a convicted person to obtain post-conviction DNA testing of biological material if the court finds that certain conditions are met.[3] The state trial court denied the motion in November 2014 on the grounds that Reed failed to prove by a preponderance of the evidence that he would not have been convicted but for exculpatory results. Reed appealed the denial, and the appellate court remanded for additional fact-finding. In September 2016, after that fact-finding was complete, the state trial court again denied post-conviction DNA testing. The appellate court affirmed the denial in April 2017 and denied rehearing in October 2017.

At this stage, Reed filed a 42 U.S.C. § 1983 complaint against the prosecuting attorney, challenging the constitutionality of Chapter 64 both on its face and as applied to his case.[4] The district court dismissed all of Reed’s claims for failure to state a claim, and the Fifth Circuit affirmed in April 2021, holding that Reed’s claim was untimely because Reed knew or should have known of his injury in November 2014. Time bars in post-conviction proceedings generally follow a common principle: the clock starts running when a defendant knew or should have known of a claim. Defense counsel argues that the clock began to run in October 2017, after Reed fully exhausted his post-conviction appeals.

At oral argument on October 11, 2022, the state argued that the clock started prior to the rehearing date in October 2017. Justice Kagan reasoned that it would be simpler to acknowledge we do not know what the authoritative construction of a court of appeals is until appeals are concluded. Justice Jackson agreed, noting that if the federal clock starts while the state appeals process is still ongoing, then the federal courts would have to pause consideration to allow state courts to weigh in first. This would be untenable and overly chaotic. Defense counsel reminded the court of the mounting evidence that points at Reed’s innocence, evidence which is still under review.

While not the hottest topic of this Supreme Court term, this case could still have important implications. DNA testing to prove actual innocence has been a litigation practice for decades, yet cases like Reed’s, in which post-conviction DNA testing has yet to be completed, often stand in perilous status because of procedural bars.

A haunting example is the recent execution of Murray Hooper in Arizona. Hooper, 76 years old at the time of his death, maintained his innocence until the day of his execution.[5] No forensic testing in Hooper’s case ever conclusively proved that he committed the murders. Hooper’s lawyers filed appeals to have newly discovered evidence considered and forensic testing completed,[6] yet these petitions were all denied.

In theory, post-conviction and habeas relief are meant to be reserved for the most deserving defendants. Courts do not want to give convicted murderers chance after chance at getting a conviction or sentence overturned, and there is, of course, a presumption that the conviction was right the first time. Yet the high procedural barrier to bringing such claims is out of line with the reality of wrongful convictions. Since 1973, 190 death-row inmates have been exonerated.[7] Post-conviction DNA testing does not merely allow defendants to draw out the appeals process and stave off execution; it is an important scientific tool that can check whether the trial court got it right. Preventing petitioners from accessing DNA testing simply because of procedural barriers is an injustice, and hopefully the Supreme Court rules as much in Reed v. Goertz.

Notes

[1] Innocence Staff, 10 Facts About Rodney Reed’s Case You Need to Know, Innocence Project (Oct. 11, 2019), https://innocenceproject.org/10-facts-you-need-to-know-about-rodney-reed-who-is-scheduled-for-execution-on-november-20/.

[2] Amy Howe, Justices Wrestle with Statute of Limitations in Rodney Reed’s Effort to Revive DNA Lawsuit, SCOTUSblog (Oct. 11, 2022), https://www.scotusblog.com/2022/10/justices-wrestle-with-statute-of-limitations-in-rodney-reeds-effort-to-revive-dna-lawsuit/.

[3] See Tex. Code Crim. Proc. Ann. § 64.03.

[4] Reed v. Goertz, 995 F.3d 425, 428 (5th Cir. 2021).

[5] Liliana Segura, Out of Time, The Intercept (Nov. 15, 2022), https://theintercept.com/2022/11/15/murray-hooper-arizona-execution/.

[6] Associated Press, Lawyers for Murray Hooper File New Appeal as Execution Date Nears, Fox 10 (Nov. 1, 2022), https://www.fox10phoenix.com/news/lawyers-for-murray-hooper-file-new-appeal-as-execution-date-nears.

[7] Innocence, Death Penalty Information Center, https://deathpenaltyinfo.org/policy-issues/innocence (last visited Nov. 27, 2022).


Election Security: US Lawmakers Concerned “Deepfake” Videos Are the Next Stage of Information Warfare Ahead of 2020 Election

By: Jack Kall

With the 2018 midterms in the rearview mirror, the nation’s attention has turned to the 2020 election. Accordingly, an increasing number of US lawmakers are concerned that a form of video manipulation known as “Deepfakes” will be the next stage of information warfare. In short, Deepfake videos are hyper-realistic manipulated videos made using artificial intelligence technology. The videos are often convincing enough that it can be difficult even to tell what has or has not been manipulated. To raise attention, BuzzFeed published a video of Barack Obama delivering a public service announcement on the dangers of the technology—except it was actually Jordan Peele.

Election security is a more important issue for US voters in the wake of Russian-led election interference in the 2016 Presidential Election. A recent Pew Research poll found that 55% of Americans say they are not too (37%) or not at all (17%) confident that election systems are secure from hacking and other technological threats. Republicans (59% at least somewhat confident in security) express greater confidence than Democrats (34%), which is a reversal of attitudes from 2016.

While the threat of Deepfakes has not garnered the same attention as Russian interference and other forms of “Fake News,” some US legislators are beginning to voice concern. This past September, three members of the House of Representatives—including the new chair of the House Intelligence Committee, Rep. Adam Schiff (D-CA)—sent a letter to Director of National Intelligence Dan Coats expressing concern that the “technology could soon be deployed by malicious foreign actors.” Senator Marco Rubio (R-FL) also expressed concern at a Senate Intelligence Committee hearing, describing a scenario in which a Deepfake video is released just before an election and goes viral before analysts can determine it is fake.

While concern is rising, there is still a shortage of solutions. In January 2019, House Democrats unveiled several election security measures, but the proposals lacked solutions for Deepfakes. The same month, the Brookings Institution released advice for campaigns on protecting against Deepfakes. It remains to be seen whether the Brookings Institution’s advice to protect infrastructure, add two-factor authentication, film the candidate at speaking engagements, and replicate a classified environment—while important general advice—is enough to protect against this ever-evolving Deepfake technology.


Chimeras in DNA Forensic Testing: What to Do?

by Ryan J. Connell, UMN Law Student, MJLST Staff

The answer, as suggested in an essay titled Chimeric Criminals by David H. Kaye in the current issue of the Minnesota Journal of Law, Science and Technology, is not to worry about it too much.

The essay criticizes the book Genetic Justice: DNA Databanks, Criminal Investigations, and Civil Liberties by Sheldon Krimsky and Tania Simoncelli. The book has latched onto a particular genetic anomaly referred to as chimerism, which denotes the presence of two genetically distinct cell lines in the human body. The authors of Genetic Justice invoke this rare condition to challenge the supposed assumption that DNA profiling is infallible.

Think for a moment about what DNA evidence has done in criminal law. Do not just think of the convictions; think of the acquittals, and of those freed from incarceration by innocence projects around the country, outcomes attributable to the use of DNA evidence. To call DNA evidence into question over a condition as rare and insignificant as chimerism stretches the confines of reasonableness. Genetic Justice variously proffers incidences of chimerism of 1/2400, 1/10, 1/8, and 1/1. Other estimates are no better. A 2010 article in the Globe and Mail entitled “The Dark Side of DNA” called DNA evidence into question and suggested that chimerism may be present in anywhere from a tiny fraction of the population to ten percent of it. If an entire science is going to be called into question, some better statistics might be advisable first.

This book and other sources, such as “Expert Evidence: The Genetic Chimerism and Its Implications for the World of Law” by Daniel Bezerra Bevenuto, assume that if genetic evidence is gathered and does not match the defendant’s DNA, the courts and lawyers will simply dismiss the case. I think courts can handle whatever problems chimerism presents. If DNA recovered at a crime scene identifies person X, person X is chimeric, and the reference sample he provides doesn’t match the sample recovered at the crime scene, the court will rightly be concerned. The natural and simple remedy is just to test again. Normally, chimeric cell lines are isolated, so a second reference sample taken from the suspect should resolve the anomaly.

Chimerism does not present the problem that the authors of Genetic Justice suggest. It is a rare occurrence for a DNA sample recovered at a crime scene not to match the DNA of the suspect it identifies, and even in those rare circumstances, the mismatch is an easy fix. For a more detailed analysis of this issue, please read the article by David H. Kaye in the Minnesota Journal of Law, Science and Technology.

The full issue of MJLST in which David Kaye’s article appears can be found here.