Layla G. Maurer


AI offers “huge and wide-reaching potential” in health; the futures of health care and AI are deeply interconnected. AI gives the field previously unimagined opportunities to streamline and deepen medical care, including identifying disease, diagnosing conditions, and more easily crowdsourcing and developing treatment plans. Its broad adoption in the field has created a pressing need for more, and better, regulation. Improved regulation is especially critical because mismanaged AI may misdiagnose patients or produce biased predictions and outcomes. Indeed, numerous examples of such bias – and of attempts to manage it – already exist, raising major ethical questions about the use of AI and presenting the issue of how to keep AI from perpetuating health disparities.

In this Note, I argue that AI is not being adequately managed at the federal level. I further argue that this lack of management stems largely from a general failure to mandate standards for data sourcing, cleaning, and testing. The health care field is rife with examples of the effects of poor management, some of which have immediate and devastating impacts on patients; mismanagement of AI, however, is not limited to health care. The potential problems arising from a lack of oversight span industry lines. Thus, no single industry or existing federal agency can claim full ownership of, or expertise in, AI as a tool. I therefore propose that the best solution is to form an entirely new top-level federal agency. This new agency would be tasked with creating federally mandated standards for ethical AI data sourcing, cleaning, and testing across industries. It would provide comprehensive management of AI datasets that do not fall under the umbrella of an existing agency such as the Food and Drug Administration (FDA). I further propose that the new regulatory body be named the “Department of Artificial Intelligence Standardization,” or DAIS.