Law and Ethics for Robot Soldiers

Law and Ethics for Robot Soldiers is the title of a new essay by Matthew Waxman and me; it will appear in Policy Review down the road, but we have posted to SSRN an annotated and footnoted version that we hope will be useful to students, researchers, and scholars.

The regulation of lethal autonomous weapons can be approached from two directions. One is to look from the front end – starting from where technology stands today and moving forward across its evolution, focusing on the incremental changes as they occur, and especially as they are occurring now. The other is to imagine the end-state – the necessarily speculative and sometimes pure sci-fi “robot soldiers” of this post’s title – and look backward to the present. If we start from the hypothetical technological end-point – a genuinely “autonomous,” decision-making robot weapon, rather than merely a highly “automated” one – the basic regulatory issue is: what tests of law and ethics would an autonomous weapon have to pass in order to be a lawful system, beginning with fundamental law of war principles such as distinction and proportionality? What would such a weapon be, and how would it have to operate, to satisfy those tests?

This is an important conceptual exercise as technological innovators imagine and work toward autonomy in many different robotic applications, of which weapons technology is only one line of inquiry. Imagining the technological end-point in terms of law and ethics means, more or less, hypothesizing what we might call the “ethical Turing Test” for a robot soldier: What must it be able to do, and how must it be able to behave, in order to be indistinguishable from its morally ideal human counterpart? The idealized conceptualization of the ethically defensible autonomous weapon forces us to ask questions today about fundamental issues – who or what is accountable, for example, or how does one turn proportionality judgments into an algorithm? Might a system in which lethal decisions are made entirely by machine, with no human in the firing loop, violate some fundamental moral principle?

All these and more are important questions. The problem with starting from them, however, is that the technology driving toward autonomous weapons is proceeding in small, incremental steps – not giant ones that immediately implicate these fundamental questions of full autonomy. (And some very important critics – their enthusiasm tempered by earlier promises of artificial intelligence that failed to deliver – question whether those small steps can ever add up to genuine autonomy. Others question whether there will ever be any real appetite among military planners to embrace full autonomy, as distinct from automated systems that nonetheless keep the human centrally in the firing loop, and not merely notionally so.)

The systems being automated first are frequently not the weapons themselves, but other parts of the larger system. Eventually, however, they might carry the weapons in train – and that might happen regardless of whether there is any separate appetite for highly automated or autonomous weapons as an independent matter. Thus, for example, as fighter aircraft become increasingly automated in how they are flown – in order to compete with enemy aircraft that are also becoming more automated – important parts of the flight functions eventually operate faster than humans can. At that point, it becomes nearly irresistible to automate, if not make fully autonomous, the weapons systems as well, because they have to be integrated with the whole aircraft and all its systems. We didn’t start out intending to automate the weapons – but we wound up there because the weapons are part of a whole aircraft system.

The facts about how automation technology is evolving are important for questions of regulating and assessing the legality of new weapons systems. In effect, they shift the focus away from imagining the fully autonomous robot soldier and the legal and ethical tests it would have to meet to be lawful – back to the front end, the margin of evolving technology today. The bit-by-bit evolution of the technology urges a gradualist approach to regulation; incremental advances in automation of systems that have implications for weapons need to be considered from a regulatory standpoint that is itself gradualist and able to adapt to incremental innovation. For that basic reason, Matt’s and my paper takes as its premise the need to think incrementally about the regulation of evolving automation.

The essay’s takeaway on regulation is ultimately a modest one – a quite traditional approach to weapons regulation, at least from the US government’s long-term perspective. Grand treaties seem to us unlikely to suit incremental technological change, particularly because they must imagine a technological end-state that might come about as anticipated, or might develop in some quite unexpected way. Sweeping and categorical pronouncements can restate fundamental principles of the laws of war, but they are unlikely to be very useful in addressing the highly specific and contingent facts of particular systems undergoing automation.

We urge, instead, a gradually evolving pattern of practice among the states developing such systems – and, as part of the process of legal review of weapons systems, reasoned articulation of how and why highly particular, technically detailed weapons systems meet fundamental legal standards. In effect, this proposes that states develop bodies of evolving state practice – sometimes agreeing with other states and their practices, but likely other times disagreeing. This seems to us the most suitable means of developing legal standards for the long term to address evolving weapons technology. Abstract below the fold.

Abstract:

Lethal autonomous machines will inevitably enter the future battlefield — but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability of these technologies comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment.

The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted in successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe — but as the decision-making power of machines and the tempo of operations potentially increase, that human role will likely, though slowly, diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them — U.S. policy for resolving those dilemmas should be built on these assumptions.

The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses — such as prohibitory treaties — unworkable as well as ethically questionable. Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain — the contours of international law as well as international expectations about appropriate conduct — on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective or dangerous prohibitions — or those who would prefer few or no constraints at all.

(Annotated version of an essay to appear in a general interest journal that does not use footnotes in its articles; sources have been added here for scholarly convenience.)
