Abstract: Machines powered by artificial intelligence (AI) are on the rise. In many use cases, their performance already exceeds human capabilities. In this essay, I explore fundamental regulatory issues related to such “autonomous machines”. I adopt an analytical perspective that highlights the importance of what I call the “deep normative structure” of a particular society for crucial policy choices with respect to autonomous machines. I make two principal claims. First, the jargon of welfare economics appears well-suited to analyse the opportunities and risks of innovative technologies, and it is also reflected in legal doctrine on risk, responsibility and regulation. A purely welfarist conception of “the good” will tend to move a society in a direction in which autonomous systems eventually take a super-prominent role. However, such a conception assumes more than the welfarist calculus can yield, and it also ignores the categorical difference between machines and humans that is characteristic of Western legal systems. Second, taking the “deep normative structure” of Western legal systems seriously leads to policy conclusions regarding the regulation of autonomous machines that emphasize this categorical difference. Such a humanistic approach acknowledges human weaknesses and failures and protects humans, and it is characterized by fundamental human rights and by the desire to achieve some level of distributive justice. Welfarist pursuits are constrained by these humanistic features, and the severity of these constraints differs from jurisdiction to jurisdiction. I illustrate my argument with legal applications drawn from various issues in the fields of contract and tort.