January 2026

Proposed Rule of Evidence 707: Machine Experts

Autumn Zierman, MJLST Staffer

Citing concerns about the reliability and authenticity of machine-generated evidence, the Advisory Committee on Evidence Rules (“the Committee”) published its Proposed Rule 707 (“Rule 707”) last June. Rule 707 seeks to address those instances in which AI evidence is presented in court without an accompanying human expert.[1] The rule would hold the artificial intelligence that generated the evidence to the same standard applied to human experts (the Daubert standard).[2] The proposed rule reads: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d).”[3] With the notice and comment period ending on February 16th, 2026, time remains to review (and comment on) the Committee’s plan.

Susceptibility of Training Data to Flaws

The first flaw in Rule 707 is that it requires judges to become expert arbiters of the reliability of training data. The proposed rule requires courts to determine whether a machine can demonstrate reliability in how it was trained.[4] Problematically, most openly available machine learning tools or AI systems likely to be used to generate court testimony are black box systems.[5]

A “black box” system is one trained on large data sets to generate autonomous results or simulate thought, whose internal decision-making cannot be observed from the outside.[6] It is, by design, impossible to explain how a black box system arrives at its decisions.[7] Yet black box systems are known to perpetuate the implicit biases of their creators, because the data sets they are trained on are inherently skewed.[8]

Certainly, one may argue that machines are less likely to be biased than their human expert counterparts. But this argument misses a core objective of our adversarial system: juries are asked to evaluate the reliability of the evidence presented in court.[9] Experts may be impeached, but how do you impeach a system you know nothing about?

Possible Confrontation Clause Challenge

Considering the nature of the adversarial system, Rule 707 also raises questions regarding the Confrontation Clause. The Sixth Amendment guarantees the right of all accused to “be confronted with the witnesses against him.”[10] This manifests in a right of the accused to cross-examine the State’s witnesses against them, which requires the physical presence of a witness at the criminal trial.[11] This requirement extends, in many cases, to the experts the State relies upon in building its case.[12]

Imagine, then, that the State seeks to introduce a composite sketch created by a machine from information given in witness interviews.[13] The sketch does not just assist in the investigation; it lends legitimacy to the investigation’s result. But where a sketch artist may be cross-examined and evaluated in front of a jury, there is no way to examine the machine for the inherent bias that shaped such a sketch. There is no way for a machine to present itself in fulfillment of the Confrontation Clause.

This flaw goes to the heart of the problem with Proposed Rule 707: it treats machines as replacements for human witnesses. Whatever potential machines hold for generating evidence, they cannot replace the human element that the trial system seeks to preserve.

Invitation Not a Warning

The Committee has prefaced Rule 707 as “not intended to encourage parties to opt for machine-generated over live expert witnesses.”[14] However, clever lawyers seeking a statistically based argument will view the rule as another means of supporting their client’s case. Thus, the proposed rule cuts both ways: either courts bury themselves testing the reliability of each piece of AI evidence offered, or they adopt standards for broad acceptance, opening the door to a flood of AI-generated evidence.

In its comment on the proposed rule, Lawyers for Civil Justice opines that “[c]ourts and lawyers will read this as authorization, not as a hurdle or prohibition. The permissive language—‘the court may admit’—signals achievability, not restriction.”[15]

Conclusion

Rule 707 seeks to address a rising problem: the reliability of AI evidence in the courtroom. But it relies on a human standard for a nonhuman problem, which opens the door to a plethora of problems arising at trial.

 

Notes

[1] Comm. on Rules of Prac. & Proc., Agenda Book, 76 (June 10, 2025), https://www.uscourts.gov/sites/default/files/document/2025-06-standing-agenda-book.pdf.pdf [hereinafter “Agenda Book”].

[2] Federal Rule of Evidence 702(a)-(d) is usually applied through Daubert analysis, which considers the following five factors: whether the theory or technique employed (i) has been tested; (ii) has been subjected to peer review; (iii) has an acceptable error rate; (iv) is governed by established standards controlling its application; and (v) is generally accepted in the scientific community. See generally Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[3] Agenda Book at 76.

[4] Id. at 77.

[5] Matthew Kosinski, What Is Black Box AI and How Does It Work?, IBM (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai.

[6] Id.

[7] Id.

[8] See James Holdsworth, What Is AI Bias?, IBM, https://www.ibm.com/think/topics/ai-bias (last visited Jan. 20, 2026); see also Lou Blouin, Can We Make Artificial Intelligence More Ethical?, Univ. of Mich.-Dearborn (June 14, 2021), https://umdearborn.edu/news/can-we-make-artificial-intelligence-more-ethical.

[9] Fed. R. Evid. 1008.

[10] U.S. Const. amend. VI.

[11] See generally Crawford v. Washington, 541 U.S. 36 (2004).

[12] See generally Bullcoming v. New Mexico, 564 U.S. 647 (2011) (requiring the lab technician responsible for generating a report to be present at trial for cross-examination).

[13] Kim LaCapria, Police Raise Eyebrows After Using ChatGPT to Create Composite Sketches of Suspects: ‘No One Knows How [It] Works’, The Cool Down (Dec. 10, 2025), https://www.thecooldown.com/green-business/ai-generated-police-sketch-chatgpt/.

[14] Agenda Book at 75.

[15] Lawyers for Civil Justice, Comment Letter on Proposed Rule 707 (Jan. 5, 2026), https://www.regulations.gov/comment/USC-RULES-EV-2025-0034-0013.