(And: thanks to Instapundit for linking to the new policy essay by Matthew Waxman and me from the Hoover Institution, referenced at the end of this post, Law and Ethics for Autonomous Weapon Systems – thanks Glenn!)
Last November, two documents appeared within a few days of each other, each addressing the emerging legal and policy issues of autonomous weapon systems – and taking strongly incompatible, indeed opposite, approaches. One was from Human Rights Watch, whose report, Losing Humanity: The Case Against Killer Robots, made a sweeping, preemptive, provocative call for an international treaty ban on the use, production, and development of what it defined as “fully autonomous weapons” and dubbed “Killer Robots.” Human Rights Watch has followed that up with a public campaign for signatures on a petition supporting a ban, as well as a number of publicity initiatives that (I think I can say pretty neutrally) seem as much drawn from sci-fi and pop culture as anything. It plans to launch this global campaign at an event at the House of Commons in London later in April.
The other was the Department of Defense Directive, “Autonomy in Weapon Systems” (3000.09, November 21, 2012). The Directive establishes DOD policy and “assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems … [and] establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.”
By contrast to the sweeping, preemptive treaty ban approach embraced by HRW, the DOD Directive calls for a review and regulatory process – in part an administrative expansion of the existing legal weapons review process within DOD, but reaching back to the very beginning of the research and development process. In part it aims to ensure that whatever level of autonomy a weapon system might have, and in whatever component, the autonomous function is intentional and not inadvertent, and has been subjected to design, operational, and legal review to ensure that it both complies with the laws of war in the operational environment for which it is intended – and will actually work in that operational environment as advertised. (The DOD Directive is not very long, and makes the most sense, if you are looking for an introduction into DOD’s conceptual approach, read against the background of a briefing paper issued earlier, in July 2012, by DOD’s Defense Science Board, The Role of Autonomy in DOD Systems.)
In essence, HRW seeks to ban autonomous weapon systems, rooting a ban on autonomous lethal targeting by machine per se in its interpretation of existing IHL, while calling for new affirmative treaty law specifically to codify it. By contrast, DOD adopts a regulatory approach grounded in existing processes and law of weapons and weapons reviews. Michael Schmitt and Jeffrey Thurnher offer the basic legal position underlying DOD’s approach in a new article forthcoming in Harvard National Security Journal, “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict.” They say that autonomous weapon systems are not per se illegal under the law of weapons, and that their legality or restriction on their lawful use in any particular operational environment depends upon the usual principles of targeting law. There will be machine systems that will never be lawful for use in some operational environments or even in any operational environment – but maybe some that will.
I think Schmitt and Thurnher have it right as a legal matter – and quite clearly so – but there are important dissenting voices. A different view is offered by University of Miami’s Markus Wagner in, for example, “Autonomy in the Battlespace: Independently Operating Weapon Systems and the Law of Armed Conflict” (chapter in International Humanitarian Law and the Changing Technology of War, 2012). New School for Social Research professor Peter Asaro has offered a reading of Protocol I and other laws of armed conflict treaties aiming to show that human beings are assumed to be present as moral agents engaged in targeting in these texts (forthcoming special section of the International Review of the Red Cross). Asaro is careful to hold out only that this interpretation is implicit, rather than explicit – a thoughtful and creative reading, though not finally one that persuades the hard-hearted lex lata lawyer in me. (Asaro is not a lawyer, but a “philosopher of technology,” thus establishing himself as having the Coolest of Jobs, and also co-founder of an organization that has been calling for a ban for several years; Peter and I have cordially disagreed at several academic discussions, most recently at the outstanding WeRobot 2013 conference at Stanford Law School earlier this week.)
A debate over autonomous weapon systems is thus underway in academic law and policy – and in the Real World. It promises to heat up considerably. Much of the debate (as Peter’s and my exchange at the WeRobot 2013 conference suggests) goes to what one believes is the bedrock moral principle (and which, if true, ought to be embraced as law) for targeting and weapons. Is it per se immoral for a human being ever to be targeted autonomously by a machine that (as “full autonomy” is defined by DOD) has no human being “in” or “on” the loop, either in target selection or engagement with the target? Is a human being essential to those two actions – target selection and target engagement – and is the absence of a human being fatal to its morality, irrespective of how well or how badly the machine does at targeting only what it ought to and minimizing collateral harms? Peter takes the position that the human being is essential; my position is that the bottom-level moral principle at issue here is not whether it is a human or not a human, but whether whatever does the targeting is able to comply with the requirements of the laws of war. The “package” is simply an incident of nature, contingent, and not morally controlling.
Peter’s position, not mine, is the one taken by a number of very smart ethicists and philosophers, including, for example, Wendell Wallach, who describes a machine taking such a lethal decision as “mala in se.” University of Sheffield computer science professor Noel Sharkey (the well-known public commentator on these issues, with whom I’ve had the pleasure of friendly disagreement before and no doubt will again) also takes this position, though he also takes others that are factual in nature. But on this moral argument, the requirement of a human being is the end of the moral chain, so to speak. I don’t agree with it, but I understand the arguments driving it. HRW’s report, by contrast, launches into quite a different kind of argument, and a much more problematic one. Though it appears to accept the buck-stopping moral position, it also and mostly argues strenuously for two factual claims.
The first is that, no matter how much time goes by, as a matter of fact, machine intelligence will never be adequate to the moral decision-making that lethal targeting requires. To which, of course, the proper response is, fifty years? A hundred years? Two hundred years? Maybe HRW is right. But how does it know, and what gives being a human rights monitor any special ability to see the future of technology – and tell us what to ban and not ban today, in order to ensure that a future that it purports to see does not come about? Not all of us are quite as certain about where technology might go and what it might yield – and we are quite unwilling, on HRW’s say-so, to give up the possible future social gains (including reducing harm on the battlefield) that such technologies might produce along the way because HRW foresees a future somewhere between a Philip K. Dick novel and Terminator. (Or as a friend put it, knowing Ken co-blogs with Ilya, “So who sailed from the Grey Havens and gave HRW a palantir?” -ed.)
The second is that, no matter what technological developments take place, machines could never offer the affective and emotional qualities that targeting decisions in war do, and properly should, require on the battlefield – sympathy, empathy, compassion. Again, this is a factual claim about the future of machine intelligence – a prediction extending into the future, forever – that leaves one to ask, how does HRW claim to know any such thing? And it’s a particularly peculiar claim coming from a human rights monitor whose bread and butter in armed conflict reporting not infrequently involves things soldiers did on the battlefield because of fear, desire for vengeance, simple bad judgment from cold and hunger, and the limits of human cognition in the fog of war – a conspicuous, yet all-too-human, absence of empathy and compassion. One wonders why HRW didn’t just as easily focus on those less praiseworthy human emotions and at least entertain the possibility that a machine that has no emotions either way, but which might be programmed to behave in ways that respect the humanity of non-combatants and, further, might be programmed simply to sacrifice itself in order to spare non-combatants, might when all is said and done be a very good thing.
In conversations with HRW, I’ve been told, and encouraged to note publicly, that it does not want its report and call for a ban to be understood in extreme ways. I’m happy to do that, with one caveat. So, for example, its call for a ban on “development” of fully autonomous weapons does not mean everything that language might be read to say. It also appears to want to find a way not to be interpreted as declaring the future history of technology, though that appears more difficult, given the language of the report. My (genuine) advice to HRW on this point (though not my view, of course) is to say that it’s not predicting where technology will and won’t go, as a matter of necessity. Instead, it’s saying that, in its judgment, it is overwhelmingly likely that all these bad scenarios would emerge over the long run – and that these scenarios are sufficiently bad to justify banning all these many things today.