Meaningful Human Control Over Autonomous Systems
Filippo Santoni de Sio
Fully Autonomous Weapon Systems (AWS), or “killer robots”, once activated, can select and attack targets without further human intervention. AWS raise two related ethical concerns: (a) it may be morally wrong to give a machine control over lethal activities, and (b) the use of AWS may create undesired gaps in the attribution of responsibility for military actions. Governmental and non-governmental actors have insisted on the ethical principle of “meaningful human control” over AWS as a way to preserve human moral responsibility, but they have also recognized the lack of a philosophical theory that gives this principle precise content. Here I present work (joint with Jeroen van den Hoven) laying the foundations of a philosophical theory of meaningful human control over autonomous systems in general, with illustrations drawn from both AWS and autonomous cars.
Our theory builds on insights from the “compatibilist” literature on free will and moral responsibility, in particular the concept of “guidance control” as elaborated by Fischer & Ravizza (1998). We aim to make a fresh contribution to computer and robot ethics by systematically introducing an analysis of control grounded in the philosophical literature on free will and moral responsibility. We also aim to contribute to the compatibilist theory of moral responsibility by elaborating a new philosophical framework for understanding one particular kind of human control: meaningful human control over autonomous robotic systems.