Abstract

Artificial Intelligence (AI) is transforming border security and law enforcement, with facial recognition technology (FRT) at the forefront of this shift. Widely adopted by U.S. federal agencies such as the FBI, ICE, and CBP, FRT is increasingly used to monitor both citizens and migrants, often without their knowledge. While this technology promises enhanced security, its early-stage deployment raises significant concerns about reliability, bias, and ethical data sourcing. This paper examines how FRT is being used at the U.S.-Mexico border and beyond, highlighting its potential to disproportionately target vulnerable groups and infringe on constitutional rights.

The paper provides an overview of AI's evolution into tools like FRT that analyze facial features to identify individuals. It discusses how these systems are prone to errors—such as false positives—and disproportionately affect racial minorities. The analysis then delves into constitutional implications under the Fourth Amendment's protection against unreasonable searches and seizures and the Fourteenth Amendment's guarantee of equal protection. This framework is particularly relevant when considering cases like those involving Clearview AI and Rite Aid, which resulted in severe consequences for both companies and exemplify how improper FRT deployment can lead to significant privacy violations and reinforce societal disparities.

This paper advocates for a multi-layered approach to address these challenges. It argues for halting FRT deployment until comprehensive safeguards are established, including bias mitigation measures, uniform procedures, and increased transparency. By reevaluating the relationship between law enforcement and citizens in light of emerging technologies, this paper underscores the urgent need for policies that balance national security with individual rights.

Included in: Computer Law Commons