Machine Performance and Human Failure: How Shall We Regulate Autonomous Machines?

Journal of Business & Technology Law


Machines powered by artificial intelligence (“AI”) are on the rise. In many use cases, their performance already exceeds human capabilities. In this essay, I explore fundamental regulatory issues related to such “autonomous machines.” In doing so, I adopt an analytical perspective that highlights the importance of what this essay refers to as the “deep normative structure” of a particular society for crucial policy choices with respect to autonomous machines. The essay makes two principal claims. First, the vocabulary of welfare economics appears well-suited to analyzing the opportunities and risks of innovative technologies, and it is also reflected in legal doctrine on risk, responsibility, and regulation. A purely welfarist conception of “the good” will tend to move a society in a direction in which autonomous systems eventually take a prominent role. However, such a conception assumes more than the welfarist calculus can yield, and it also ignores the categorical difference between machine and human that is characteristic of Western legal systems. Second, taking the “deep normative structure” of Western legal systems seriously leads to policy conclusions regarding the regulation of autonomous machines that emphasize this categorical difference. Such a humanistic approach acknowledges human weaknesses and failures and protects humans. It is characterized by fundamental human rights and by the desire to achieve some level of distributive justice. Welfarist pursuits are constrained by these humanistic features, and the severity of these constraints differs from jurisdiction to jurisdiction. The argument is illustrated with legal applications drawn from various issues in the fields of contract and tort.