The proposed rule specifically addresses machine-learning outputs submitted as evidence without expert testimony, according to an agenda for the upcoming meeting of the Committee on Rules of Practice and Procedure. One example of such an output is a digital analysis of whether two works are substantially similar in copyright litigation.
The Advisory Committee on Evidence Rules drafted proposed Rule 707 to address reliability concerns with machine-learning outputs after determining that those concerns resemble the ones raised by expert testimony.
"When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702 (a)-(d). This rule does not apply to the output of basic scientific instruments," the proposed rule says.
At a November meeting in New York City, the committee considered two AI-related rules: one dealing with machine-learning outputs and another addressing AI-manipulated video or audio clips known as "deepfakes."
The committee decided that current evidentiary rules are sufficient to handle deepfakes but is keeping a draft rule on hand in case those rules prove inadequate.
At a May meeting in Washington, a majority of advisory committee members recommended releasing proposed Rule 707 for public comment; the U.S. Department of Justice voted against its release.
--Additional reporting by Jeff Overley. Editing by Karin Roberts.