Legal Infrastructure for Driverless Cars, and Comparisons Between the Law and Ethics of Self-Driving Cars and Autonomous Weapon Systems

(Thanks Instapundit for the link.)  Driverless cars are coming faster than most observers would have thought.  One big reason, according to Bryant Walker Smith in a recent article in Slate, is that people predicting the driverless car future assumed that such cars would have to be part of centrally run systems, with corresponding changes to physical infrastructure, such as special roads embedded with magnets.  Or, we can add, centralized computers taking control of all the vehicles in the system.  On that view, the changeover would have to be centralized and take place for a given area all at once; it wouldn’t scale incrementally.  That was the thought, anyway, and Smith (who is a fellow at Stanford’s Center for Internet and Society) says that as a consequence, ever “since the 1930s, self-driving cars have been just 20 years away.”

II
Self-Driving Cars

Today’s self-driving systems, however, are “intended to work with existing technologies.”  They use sensors and computers to respond, as individual vehicles, to the environment around them, without having to be cogs in a larger machine.  This means they can adapt to the existing infrastructure rather than requiring that it all be replaced as a whole system.  Smith’s real point, however, is to go on from physical infrastructure to the rules of the road.  Infrastructure also includes, he says,

laws that govern motor vehicles: driver licensing requirements, rules of the road, and principles of product liability, to name but a few. One major question remains, though. Will tomorrow’s cars and trucks have to adapt to today’s legal infrastructure, or will that infrastructure adapt to them?

Smith takes up the most basic of these questions – are self-driving vehicles legal in the US?  They probably can be, he says – and he should know, as the author of a Stanford CIS White Paper that is the leading analysis of the topic.  Self-driving vehicles

must have drivers, and drivers must be able to control their vehicles—these are international requirements that date back to 1926, when horses and cattle were far more likely to be “driverless” than cars. Regardless, these rules, and many others that assume a human presence, do not necessarily prohibit vehicles from steering, braking, and accelerating by themselves. Indeed, three states—Nevada, Florida, and most recently California—have passed laws to make that conclusion explicit, at least to a point.

Still unclear, even with these early adopters, is the precise responsibility of the human user, assuming one exists. Must the “drivers” remain vigilant, their hands on the wheel and their eyes on the road? If not, what are they allowed to do inside, or outside, the vehicle? Under Nevada law, the person who tells a self-driving vehicle to drive becomes its driver. Unlike the driver of an ordinary vehicle, that person may send text messages. However, they may not “drive” drunk—even if sitting in a bar while the car is self-parking. Broadening the practical and economic appeal of self-driving vehicles may require releasing their human users from many of the current legal duties of driving.

For now, however, the appropriate role of a self-driving vehicle’s human operator is not merely a legal question; it is also a technical one. At least at normal speeds, early generations of such vehicles are likely to be joint human-computer systems; the computer may be able to direct the vehicle on certain kinds of roads in certain kinds of traffic and weather, but its human partner may need to be ready to take over in some situations, such as unexpected road works.  A great deal of research will be done on how these transitions should be managed. Consider, for example, how much time you would need to stop reading this article, look up at the road, figure out where you are and resume steering and braking. And consider how far your car would travel in that time. (Note: Do not attempt this while driving your own car.)

Technical questions like this mean it will be a while before your children are delivered to school by taxis automatically dispatched and driven by computers, or your latest online purchases arrive in a driverless delivery truck. That also means we have time to figure out some of the truly futuristic legal questions: How do you ticket a robot? Who should pay? And can it play (or drive) by different rules of the road?
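
Smith’s handover thought experiment is easy to make concrete with a bit of back-of-the-envelope arithmetic.  The figures below are purely illustrative assumptions of mine – a 65 mph highway speed and a five-second re-engagement delay – not numbers drawn from Smith’s article or White Paper:

```python
# Back-of-the-envelope handover arithmetic (illustrative assumptions only).
# Question: how far does a car travel while its distracted human "partner"
# looks up, re-orients, and takes back the wheel?

def handover_distance_feet(speed_mph: float, takeover_seconds: float) -> float:
    """Distance covered, in feet, during the human takeover delay."""
    feet_per_second = speed_mph * 5280 / 3600  # miles per hour -> feet per second
    return feet_per_second * takeover_seconds

# Assumed values: 65 mph and a 5-second re-engagement delay.
print(round(handover_distance_feet(65, 5)))  # ~477 feet
```

Even a modest delay at highway speed puts the car hundreds of feet down the road before the human is back in the loop, which is part of why the handoff is as much a design question as a legal one.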

The White Paper from which this article is drawn is well worth reading.  And at least if you are as deeply engaged in the legal and normative discussions surrounding autonomous weapon systems as I am, it is well-nigh impossible to read this policy and legal analysis about automobiles and not ask how much it differs from the kinds of questions one would ask about weapons.  They are different in vital ways, of course.  Weapons are intended to kill people, while driverless vehicles are not, for one thing.  For another, weapons in the international laws of war implicate the universal obligations of the sides in a conflict and their conduct of hostilities, where the sides do not share common aims; the regulation of driverless vehicles, by contrast, concerns a single society and the tradeoffs it makes for its welfare as a whole, including the possibility of accidents and losses caused by individuals or by technology.

Despite these differences, however, when it comes to technologies for making decisions, whether in target selection and the firing of a weapon in the case of autonomous (or semi-autonomous) weapon systems, or in a vehicle’s evaluation of risk on the road in the case of self-driving cars, there are important similarities in the decisional technologies, particularly at the granular level.  Compare, for example, the ability to identify a person using weapons in battle with the ability to identify another vehicle, a pedestrian, or a bicyclist.  Moreover, the Nevada statute Smith cites offers a legal rule for accountability – something that has deeply troubled many observers of autonomous weapons development – by treating the person who engages the self-driving system as the driver.

III
Cars and Weapons

Even just considering automobiles and driving, however, Nevada’s rule of accountability will work only so long as the person designated as the driver is capable of perceiving when he or she needs to take control of the self-driving vehicle and is capable of driving it adequately.  In some circumstances, and increasingly over time, that won’t be the case.  Gradually, this rule – which essentially treats the self-driving system as a convenience for an otherwise capable driver – will become an anachronism, as self-driving cars, over a few decades, gradually fill with people who aren’t really adequate to the tasks of driving.  After all, one of the primary social utilities of self-driving cars will not be convenience, but a socially efficient way of allowing the elderly to be mobile without the risks of their driving themselves.  And at some tipping point, presumably, driving skills will wither even among the capable.  As Gary Marcus noted in a November 2012 essay in the New Yorker, the moral and legal implications of each of these decision technologies are likely to be profound:

Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

The assumption behind this, of course, is that as a matter of technical fact, the machine is better – especially at the speeds at which these decisions must be made – than you are.  Within two or three decades, Marcus says:

the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

Marcus is likely right about that in the case of self-driving cars.  But the same might well turn out to be true for at least some forms of automated weapon systems, in at least some degrees and some situations – especially when they operate at the speeds necessary to respond to enemy systems.  It is also likely to happen for weapons in exactly the way that Smith’s Slate article identifies in the case of self-driving vehicles – incrementally, using existing technologies and gradually adapting them and adapting to them over time.

This incremental evolution is an important reason why Matthew Waxman and I (in our new Policy Review essay “Law and Ethics for Robot Soldiers” and also in abbreviated form in the Hoover Institution’s Defining Ideas journal) argue for an incremental approach to the regulation of these systems, both to ensure that law and ethics are taken into account at the front end as design begins, and throughout the development process.  How far will these technologies get in the case of weapons from the standpoint of law and ethics?  I can’t foretell the future – although I believe weapons technology is likely to do over a longer run of time what the technologies of self-driving vehicles seem to be doing in a shorter-than-anticipated span of time.  Of course I don’t know for certain what the future of technology will bring.

IV
Can Human Rights Watch Foretell the Future?

I can say for certain, however, that I find quite unpersuasive Human Rights Watch’s recent report declaring flatly that machines will never, and cannot ever, be better than human beings in the operation of weapons, in a legal or ethical sense.  How on earth does it purport to know this?  How does it know this today – for the next two or three generations, over the next hundred years?  Where does it get that kind of certainty about the future direction of technology?  What about being a human rights organization gives HRW the special expertise or ability to know how technology will function decades from now?  How does being a human rights monitoring NGO, no matter how widely respected, give it the ability to read the future, and with such certainty that it confers the moral authority to call for a flat-out ban, not just on the production, deployment, or use of such systems, but even on attempts to develop them?  That might be quite a lot of stuff banned, even at the early stages of development or even as research projects.

Gary Marcus can’t quite bring himself to say this in his New Yorker essay.  To be blunt, he can’t get himself to do more than mumble that “the solution proposed by Human Rights Watch—an outright ban on ‘the development, production, and use of fully autonomous weapons’—seems wildly unrealistic.”  Yes, of course.  But the burden of his observations about self-driving cars does not point to the conclusion that even though autonomous weapons would be a terrible idea for all the reasons HRW has said, its ban solution is unrealistic (because the Pentagon won’t give up its machines).  It points instead to the conclusion that it might be true, under some circumstances, that the machine would make target selection and firing decisions better than humans – and just as in the future you might not be legally allowed to drive your own car and, in any case, it would be immoral of you to do so, in the future it might also be illegal and immoral of you to try to do what the autonomous or semi-autonomous weapon can do better.  And for that matter, as Ben Wittes has noted in this debate at Lawfare, it would be immoral not to try to develop the weapon systems that can perform better, even if you can’t know in advance whether they will or not.

In any case, the lesson of self-driving cars is that these technologies are advancing incrementally, and the proper regulatory response is to regulate them incrementally.  Return to Smith’s Slate article and White Paper.  As he says, today that means, in the case of self-driving cars, allowing someone to text while the car is under automated control, but not to be drunk in the car because, after all, one still might have to take the wheel.  But tomorrow the technology might be much improved, and the tradeoff of not risking a drunk person driving against the risks of machine control might go very differently.  Regulation has to take into account the technological state of automation; it’s a matter of degree, not an on-off switch of autonomous or not.  The same will be true in the case of automated weapons.

V
Incremental Technological Change, Incremental Regulation

Far from Gary Marcus’ apparent belief that the Defense Department will deal with the law and ethics of autonomous weapons merely as a function of being unwilling to give up its massive R&D investment in robots, the Pentagon is gradually moving to a sensible, incrementally based approach to the regulation of autonomous and semi-autonomous weapon systems, as found in its recent Directive, Autonomy in Weapon Systems (November 21, 2012).  There’s little appetite in those quarters for robot soldiers marching to war – but there is a recognition that automation can increase precision and protection in war.  And many of the decisions that weapon systems would have to make, at the granular level, will look an awful lot like those that driverless cars will have to make.  Is it a pedestrian, is it a vehicle, is it a train – is it a person armed with a gun or a child?  Then there are the truly moral decisions: all those moral philosophy hypotheticals (allowing the trolley car to continue down its existing track and killing 200 people, or instead affirmatively diverting it down another track and killing 20 people, for example) might turn out to have an extended life, because these might indeed be real decisions.

If this were in the context of a weapon system, moral questions such as those raised by the trolley cases might cause one to say ‘case closed’ on why humans have to make such decisions.  But the question arises inevitably in the self-driving car case as well.  Since it is a situation we can concretely imagine in advance, it is one we ought to answer in advance – either as human decisionmakers who might face the scenario or as programmers giving a machine instructions for how to respond.  Why should we be willing to address it in the case of self-driving cars in ways that might well program certain machine responses as our ethical decisions – but not do essentially the same in the case of weapons?  Weapons and vehicles are different, to reiterate, and there are high-level, essentially philosophical responses one can make.  But at the incremental, granular, daily level of whether any particular decision protocol is correct for particular circumstances, there will be a certain overlap.  Parts of these decision structures will look very similar between the two technologies.
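
To make that overlap concrete at the granular level, here is a minimal and entirely hypothetical sketch of what one such decision protocol might look like.  Nothing in it reflects any actual vehicle or weapon system; the maneuver names, the numbers, and above all the “minimize expected harm” rule are invented for illustration – and whether that rule is even the right one is precisely the sort of question that would have to be answered in advance, by humans:

```python
# Hypothetical decision protocol - purely illustrative, not any real system.
# It encodes one possible (and contestable) ethical rule: among the maneuvers
# the vehicle can still execute in time, pick the one with the lowest expected harm.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # an estimate produced, with uncertainty, upstream
    feasible: bool              # can it still be executed in the time remaining?

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the feasible maneuver with the lowest expected harm."""
    feasible = [m for m in options if m.feasible]
    return min(feasible, key=lambda m: m.expected_casualties)

# Marcus's bridge hypothetical, with invented numbers:
options = [
    Maneuver("keep_course", expected_casualties=4.0, feasible=True),
    Maneuver("swerve", expected_casualties=0.8, feasible=True),
    Maneuver("brake_hard", expected_casualties=2.5, feasible=True),
]
print(choose_maneuver(options).name)  # "swerve", under these assumed numbers
```

Relabel the maneuvers as engagement options and the structure of the choice – though not its stakes or its legal framework – looks strikingly similar; that is the granular overlap the comparison is meant to bring out.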
