Authors

Ian Maddox

Abstract

From traditional methods like ballistics and fingerprinting to the probabilistic genotyping models of the twenty-first century, the forensic laboratory has evolved into a cutting-edge area of scientific exploration. This rapid growth in forensic technologies will not stop here. Considering recent developments in artificial intelligence (“AI”), future forensic tools will likely become increasingly sophisticated. To be sure, AI-enabled forensic tools are far from theoretical; AI applications in the forensic sciences have already emerged in practice. Machine learning-enabled acoustic gunshot detectors, facial recognition software, and a variety of pattern-recognition models are already disrupting law enforcement operations across the country. Soon, criminal defendants will need to learn how to navigate a courtroom dominated by AI-enabled expert systems. Unfortunately, there is little guidance in the caselaw or in the Federal Rules of Evidence on how exactly criminal defendants should approach AI as evidence in the courtroom. Although a handful of scholars have taken up the task of exploring the intersection of AI and evidence law, these studies have primarily focused on issues of authentication or on the difficulties of applying the Daubert standard to AI evidence. This study contributes to the ongoing exploration of AI in the courtroom by analyzing the rights of criminal defendants facing AI-generated testimony under the Confrontation Clause of the Sixth Amendment. It illustrates that, in a future where AI-enabled forensic tools are increasingly used to inculpate defendants in criminal prosecutions, the right to confrontation will be increasingly eroded. This is largely because courts have carved out a broad “machine-generated data” exception to the Confrontation Clause. Under this exception, data generated by a sufficiently autonomous machine falls outside the ambit of constitutional protection. The rationale is that such transmissions are too autonomous to be attributed to any human actor, and the Confrontation Clause protects only statements made by a human rather than a machine learning model. This exception to the right to confrontation is significant. Practically, these limitations could have a measurable negative impact on a defendant’s capacity to test the reliability of an AI model in court. Normatively, this study illustrates that, in a world where AI algorithms proffer inculpatory evidence of criminal wrongdoing, the right to confrontation adds little value for criminal defendants. As courts and scholars reinterpret and refine the rules of evidence to better reflect technological realities, some attention should be given to the proper place of the right to confrontation.

Included in

Evidence Commons
