
Abstract

Recent advances in artificial intelligence have set off a frenzy of commercial activity, with companies fearful of falling behind if they cannot quickly incorporate the new technology into their products or internal processes. At the same time, numerous scholars from the machine learning community have warned of the fundamental risks that uninhibited use of artificial intelligence poses to society. The question is not whether artificial intelligence will cause harm, but when and how. The certainty of future harm necessitates that legal scholars and practitioners examine the liability implications of artificial intelligence. While this topic has received increasing attention in the literature, the discussion is lacking in two key ways. First, there has been little attempt to consolidate the scholarship on the range of legal theories that might apply to harm resulting from the use of artificial intelligence. Second, the literature has failed to address the role that contracting may play in reducing uncertainty around liability and overriding common law approaches. This paper addresses both gaps and provides legal practitioners with an overview of key considerations related to liability allocation when contracting for artificial intelligence technology. Part I of the paper begins by briefly discussing the risks inherent in the use of artificial intelligence, in particular those resulting from a lack of transparency and explainability, and the harms that might result. Part II distills past legal scholarship on the theories that might apply when harm results from the use of artificial intelligence, including vicarious liability, products liability, and negligence. Relevant distinctions between artificial intelligence and software are discussed as they bear on the application of products liability and negligence theories in particular. Part II closes by highlighting that the current uncertainty in the legal landscape for artificial intelligence liability incentivizes contracting parties to address liability directly within their contracts. Part III then provides an overview of important considerations for contracting parties using contractual apportionment of liability to reduce uncertainty around harm resulting from the use of artificial intelligence. These considerations are organized by contracting phase and by relevant contract section.
