Archive | Robotics

RoboticsAlley Expo and a Few Thoughts Re the Regulatory Future of Robotics

I’m in Minneapolis for the next two days, taking part in a terrific industry expo, RoboticsAlley. It covers the broad range of robotics, from industrial robots to health care and assistive robots, along with a number of exhibitors from the electronics industries. Baxter-the-robot is here. It also covers drones and self-driving cars – the UAV industry association, AUVSI, is one of the sponsors – so it is a pretty wide-ranging trade show. The same goes for the various presentations and panel discussions – an excellent panel on self-driving cars, for example. Many of the presentations have focused not just on the technology but on the economics of these machines; Baxter, for example, represents a price breakthrough in a two-armed robot with a screen for a face, at $22,000, but, as a panelist observed, it probably needs to be half that price to attract small and medium-sized manufacturing or assembly businesses to experiment with it.

Another aspect of the economics of robotics, however, is investment in the companies bringing these machines from the lab to market. A number of recent reports in the business press have remarked on falling venture capital interest in certain sectors, particularly medical devices and assisted living technologies. Part of this might be driven by new taxes on medical devices that depress investment and innovation, but several speakers here suggested that the investment picture is not clear, even in the specific sector of medical and assistive devices. I was interested to see, in regard to the investment climate, the creation of a new Nasdaq ETF (ROBO) that tracks an index of publicly traded robotics and automation companies. Frank Tobe, founder and editor of The Robot Report, a highly regarded industry paper, said that he wanted a way to invest in the public market for robotics as a [...]

Continue Reading

Why Engaging in More Counterterrorism “Capture” Ops Makes Them Less Feasible Over Time

(Special note: Lawfare, where I serve as His Serenity, Book Review Editor, is absolutely delighted that VC’s own Orin Kerr has agreed to post there when the Spirit of National Security Law moves him.)

Over at Lawfare, I have a longish post about the declared US government policy of preferring capture operations over kill operations where “feasible.” This has been a constant refrain from senior US government officials for several years, including John Brennan (previously White House counterterrorism adviser and now CIA director) and President Obama in his May 23, 2013 speech at the National Defense University on counterterrorism (which Benjamin Wittes and I analyze closely in Chapter 3 of our e-book on the national security law speeches of the Obama administration, Speaking the Law, just now made available with open access at SSRN). It is safe to say that journalists and commentators have widely seen these assertions as mere pieties – window dressing on a policy of kill over capture, if only because the administration doesn’t have any place to hold new detainees.

So there was a flurry of commentary three weeks ago when US special operators, in conjunction with the CIA, launched capture operations in Libya and Somalia. Did this presage the beginning of a new era of special forces capture operations rather than drone strikes? Two days ago, on the other hand, the US launched a drone strike that killed Hakimullah Mehsud, leader of the Pakistan Taliban, whom it had been seeking for four years as the mastermind of an attack on a CIA outpost in Afghanistan that killed seven Americans. What was “feasible” supposed to mean? In practical terms, a kill operation differs from a capture operation in that the kill operation can be carried out by a drone, whereas a [...]

Continue Reading

We Robot 2014: Risks and Opportunities – Call for Submissions

For those of you working at the intersection of the law, policy, and technology of robotics, We Robot 2014 is the conference for you. Now going into its third year, it is the premier meeting on the interdisciplinary issues across law, society, and technology. The 2014 conference will be held in Coral Gables, Florida, on April 4-5, 2014. But the deadline for three-page proposals to present at the conference is coming up very fast – November 4, 2013. The 2014 theme is “risks and opportunities”:

This conference will build on existing scholarship that explores how the increasing sophistication and autonomous decision-making capabilities of robots, and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield, disrupt existing legal regimes or require rethinking of various policy issues.

Scholarly Papers

Topics of interest for the scholarly paper portion of the conference include but are not limited to:

  • Risks and opportunities of robot deployment in the workplace, the home, and other contexts where robots and humans work side-by-side.
  • Issues related to software-only systems such as automated trading agents.
  • Regulatory and licensing issues raised by robots in the home, the office, in public spaces (e.g. roads), and in specialized environments such as hospitals.
  • Design of legal rules that will strike the right balance between encouraging innovation and safety, particularly in the context of autonomous robots.
  • Issues of legal or moral responsibility, e.g. relating to autonomous robots or robots capable of exhibiting emergent behavior.
  • Usage of robots in public safety and military contexts.
  • Privacy issues relating to data collection by robots, either built for that purpose or incidental to other tasks.
  • Intellectual property challenges relating to robotics as a nascent industry, to works or inventions created by robots, or otherwise peculiar to robotics.
  • Issues arising from automation of

[...]

Continue Reading

Banning Autonomous Weapon Systems Won’t Solve the Problems the Ban Campaign Thinks It Will

Although much less visible in the United States than in Europe, the campaign to ban “killer robots” has not gone away. If anything, it is gathering steam in Europe and at the UN, where it is likely to be taken up following a report by Special Rapporteur Christof Heyns calling not precisely for a ban but for a “moratorium.” The International Coalition for Robot Arms Control (ICRAC) has released a letter signed by more than 270 “computing scientists” calling for a “ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously.”

One can share the computing scientists’ overall concerns about humanity and accountability in war, however, without thinking that a sweeping, preemptive “ban” is the right way to approach these issues of emerging technology. Over at The New Republic blog “Security States” (a joint project with the national security law website Lawfare), Matthew Waxman and I have a new post discussing these developments and explaining why the ban approach to regulating the gradual automation of weapon systems is not likely to be effective – and, moreover, is deeply mistaken because, if it somehow did take hold, it would give up the potential gains from automation technologies in reducing the harms of war. This post follows on a policy paper we did for the Hoover Institution a few months ago, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. Here is the opening (the piece, title notwithstanding, is actually about weapons and war, not domestic drones):

What if armed drones were not just piloted remotely by humans in far-away bunkers, but were programmed, under certain circumstances, to select and fire at some targets entirely on their own? This may sound like

[...]

Continue Reading

Thanks to Bryant Walker Smith for Guest-Posting Last Week on Self-Driving Cars

Thanks to Stanford CIS and CARS fellow Bryant Walker Smith for guest-posting here at Volokh Conspiracy last week on driverless car technologies. You can read his posts on driverless carts as a sort of closed course for introducing driverless vehicles; the impact of automation technologies, including driverless cars, on transportation infrastructure and environmental planning; standards of “reasonableness” in assessing the safety and liability of self-driving cars; how to plan for a mix of technologies and varying degrees of advanced capability in a road system with increasing numbers of self-driving cars, including the possibility of planning for obsolescence; and a final post observing that, in the dialogue between engineering and law, it is law’s turn to speak and lay down some essential markers.

Thanks as well to the readers who contacted me directly – mostly to say that it is helpful to hear from someone with expertise and a willingness to be cautious about the technological directions, as a matter of both prediction and normative judgment about the right way to approach this new technological future. Some commenters were a bit frustrated that Bryant declined to make sweeping or categorical pronouncements, or to engage the many widely discussed dilemmas these new technologies might raise, in favor of a much more careful approach to the many unsettled questions of design and regulation. But that caution is what I most often hear from people directly engaged in addressing these questions as policy.

Bryant’s posts also located driverless car technologies in the larger framework of transportation infrastructure: the road system, its use by emerging technologies and their impacts on it, as well as other infrastructure such as mass transit. Even if one believes the right policy is simply to let the technological path [...]

Continue Reading

Looking at My Vehicle Automation Entries in the Rear-View Mirror

Thank you for reading my posts this week. If you happen to be Eugene Volokh or Ken Anderson, thank you in particular for making them possible. And if you were one of my thoughtful commenters, thank you for questioning and challenging; I have read your remarks with great interest.

My goal in these posts was to raise a set of legally relevant issues that have yet to receive sufficient attention in public and academic discussions of vehicle automation: automated shuttles, infrastructure planning, process-based regulation, and product obsolescence.

Critically, these are also issues that matter to the present: Automated shuttles are already available, at least one environmental impact statement has already analyzed automated truck platooning, California’s Department of Motor Vehicles is currently drafting rules on self-driving vehicles, and cars with what are considered to be “advanced” driver assistance systems are on the market today. Responsible deployment of automation technologies requires a dialogue between law and engineering, and on these particular issues it is the law’s turn to speak.

Many other technical, legal, and policy issues will also be (gradually and imperfectly) resolved through this iterative process. The wholesale reinvention of our tort system, for example, is probably not necessary for self-driving cars and trucks to (eventually) reach the market. At the same time, however, incrementalism may obscure the evolution of some values, like citizen and consumer privacy, that merit more public attention.

One of the recurrent themes in my posts was the potential for greater centralization: The deployment of centrally managed shuttle fleets, the development of process-based rules that may benefit larger companies, and the continuation of manufacturer control through over-the-air updates could all tend to consolidate rather than disperse power. I understand that technical and political dangers are inherent in this systematic approach, which I shared [...]

Continue Reading

Planning for the Obsolescence of Technologies Not Yet Invented

The automated motor vehicles that I have discussed this week are just one example of the remarkable technologies coming to our roads, skies, homes, and even bodies. A decade from now, we’ll marvel at how advanced these new products are. But a decade after that, we’ll marvel at how anachronistic they have become.

Rapid technological change means that obsolescence is inevitable, and planning for it is as much a safety strategy as a business strategy. Responsible developers and regulators will need to consider the full lifecycle of products long before those products ever reach the market.

Cars of the early 20th century (JSTOR) were essentially beta products. In 1901, Horseless Age magazine noted that “[i]f a manufacturer finds that the axles of his machine are” breaking, then “the next lot of vehicles are provided with axles of a slightly larger diameter and so on until they begin to stand up pretty well.” In 1910, a GM engineer testifying in MacPherson v. Buick Motor Co. explained that “the only means” for a designer to get information about a vehicle’s performance “is to use the customers, that is to go over the complaint correspondence.”

As I noted yesterday, it is at least conceivable that a similar approach to modern design could counterintuitively end up saving lives by accelerating safety-critical innovation. But even a more cautious approach to product design and deployment is necessarily iterative.

The general bent of incremental innovation is toward greater safety. The electronic stability control now required in new cars, for example, could save thousands of lives a year if deployed fleetwide. But given the slow turnover in cars–the average age of today’s fleet exceeds ten years–reaching saturation could take well over a decade.
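
The slow-turnover point is just arithmetic, and worth making concrete. Here is a minimal sketch; the fleet size and sales volume are illustrative, roughly US-scale assumptions rather than cited statistics:

```python
# Rough fleet-penetration model: if every new vehicle sold from year 0
# carries a safety feature, how long until most of the on-road fleet has it?
# FLEET_SIZE and ANNUAL_SALES are illustrative assumptions, not cited data.

FLEET_SIZE = 250_000_000     # vehicles on the road (assumed)
ANNUAL_SALES = 15_000_000    # new vehicles per year, ~equal to retirements (assumed)

def years_to_penetration(target_share: float) -> int:
    """Years until `target_share` of the fleet has the feature, assuming
    each new vehicle replaces an older, unequipped one."""
    equipped, years = 0, 0
    while equipped / FLEET_SIZE < target_share:
        equipped = min(FLEET_SIZE, equipped + ANNUAL_SALES)
        years += 1
    return years

print(years_to_penetration(0.5), years_to_penetration(0.9))  # → 9 15
```

Even under these favorable assumptions – every new car equipped, one-for-one replacement of old vehicles – majority penetration takes the better part of a decade, and near-saturation considerably longer.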

At the same time, new products can present new dangers. Most of these dangers are [...]

Continue Reading

The Reasonable Self-Driving Car

A common debate in many circles–including the comments on my posts here–is whether legal burdens, technical limitations, or consumer preferences present the greatest immediate obstacle to fully automated motor vehicles. (Fully automated vehicles are capable of driving themselves anywhere a human can. In contrast, the low-speed shuttles from my post on Monday are route-restricted, and the research vehicles that regularly appear in the news are both route-restricted and carefully monitored by safety drivers.)

An entirely correct response is that the technologies necessary for full automation are simply not ready. Engineering challenges will be overcome eventually, but at this point they are varied and very real. If they were not, we would already see fully self-driving cars operating somewhere in our diverse world–in Shanghai or Singapore, Abu Dhabi or Auckland.

The deeper issue, which manifests itself in law, engineering, and economics, is our (imperfect and inconsistent) societal view of what is reasonably safe, because it is this view that determines when a technology is ready in a meaningful sense. Responsible engineers will not approve, responsible companies will not market, responsible regulators will not tolerate, and responsible consumers will not operate vehicles they believe could pose an unreasonable risk to safety.

How safe is safe enough? One answer, that self-driving cars must perform better than human drivers on average, accepts some deaths and injuries that a human could have avoided. Another answer, that self-driving cars must perform at least as well as a perfect human driver for every individual driving maneuver, rejects technologies that, while not perfect, could nonetheless reduce total deaths and injuries. A third answer, that self-driving cars must perform at least as well as corresponding human-vehicle systems, could lock humans into monitoring their machines–a task at which even highly trained airline pilots can occasionally fail due to understimulation [...]

Continue Reading

The Impact of Automation on Environmental Impact Statements

Since the 1950s, the Long Beach Freeway has linked the massive Ports of Long Beach and Los Angeles to, roughly, the rest of the continental United States. Because much has changed in trade and traffic since then, California’s relevant transportation authorities have decided that perhaps this freeway should change as well.

The resulting Draft Environmental Impact Statement (EIS), released in 2012, includes several project alternatives that feature a dedicated four-lane freight corridor for the many trucks that service the ports. In two of these alternatives, all of the trucks on the corridor are assumed to have automated steering, braking, and acceleration that enables them to travel in closely spaced platoons of six to eight vehicles. Smoother flows and shorter headways mean higher vehicular capacity.
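
The capacity arithmetic behind that last sentence is simple enough to sketch. The headway and platoon-gap values below are illustrative assumptions chosen for a rough comparison, not figures taken from the EIS:

```python
# Back-of-the-envelope lane capacity from time headways.
# All headway and gap values are illustrative assumptions, not EIS figures.

def lane_capacity(headway_s: float) -> float:
    """Vehicles per lane per hour at a uniform time headway (seconds)."""
    return 3600.0 / headway_s

def platoon_capacity(size: int, intra_gap_s: float, inter_gap_s: float) -> float:
    """Vehicles per lane per hour when traffic moves in platoons of `size`
    vehicles, with short headways inside each platoon and a longer gap
    between successive platoons."""
    time_per_platoon_s = (size - 1) * intra_gap_s + inter_gap_s
    return 3600.0 * size / time_per_platoon_s

human = lane_capacity(1.5)                 # ~2,400 veh/h, a commonly cited theoretical lane capacity
platooned = platoon_capacity(8, 0.5, 3.0)  # 8-truck platoons: 0.5 s gaps inside, 3 s between platoons
print(round(human), round(platooned))      # → 2400 4431
```

Even with a generous three-second gap between platoons, the tight intra-platoon spacing nearly doubles the lane’s throughput – which is why project alternatives that assume automation can promise higher capacity from the same four lanes.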

Automation–or at least automation-related litigation–is coming to an EIS near you.

For a transportation project, automation may be relevant to many of the project alternatives, including the no-build. Potential highway expansions typically use a planning horizon of at least twenty years, and yet several automakers now forecast that they will market vehicles with some kind of advanced automation within a decade. (To put this in slightly more skeptical terms, the self-driving cars that have been twenty years away since the 1930s are now just ten years away.)

As I have argued, the ongoing automation of our transportation system could change land use patterns, increase both travel demand and roadway vehicular capacity, and improve the vehicular level of service at capacity. This means that some of the basic assumptions upon which an EIS’s alternatives analysis is based, like a freeway lane’s theoretical capacity of 2400 vehicles per hour, may be outdated by the time a project alternative is implemented.

In addition, as with the Long Beach Freeway analysis, particular alternatives may involve the automation of vehicles [...]

Continue Reading

Driverless Carts Are Coming Sooner Than Driverless Cars

I’m delighted to be spending this week committing overt acts in furtherance of the Volokh Conspiracy. Since joining Stanford in 2011, I’ve been studying the increasing automation, connectivity, and capability that promise to dramatically change our lives, institutions, and laws. My posts this week will focus on one key example: self-driving vehicles (or whatever you want to call them). The timing is fortuitous, since any remaining legal or technical issues that we fail to collectively solve in the comments section of this blog can be remedied at next week’s U.S. House hearing on “How Autonomous Vehicles Will Shape the Future of Surface Transportation.”

A number of other government bodies are already shaping the legal future of autonomous driving. Nevada, Florida, California, and the District of Columbia have enacted laws expressly regulating these vehicles, California’s Department of Motor Vehicles is currently developing more detailed rules, and a number of other states have considered bills. The U.S. National Highway Traffic Safety Administration (NHTSA) released a preliminary policy statement earlier this year, and Germany, Japan, the United Kingdom, and the European Union have also taken initial domestic steps. Meanwhile, parties to the 1949 Geneva and 1968 Vienna Conventions on Road Traffic are discussing how to reconcile language in these treaties with advanced driver assistance systems.

These efforts tend to view vehicle automation as an incremental process in which driving functions are gradually shifted from human drivers to automated driving systems. The taxonomies developed by the German Federal Highway Research Institute, NHTSA, and SAE International’s On-Road Automated Vehicle Standards Committee (on which I serve) are consistent with this view, even if they are not yet entirely consistent with each other. Yesterday’s cars have antilock brakes and electronic stability control; today’s cars are getting adaptive [...]

Continue Reading

Bryant Walker Smith Guest-Blogging This Week About Self-Driving Cars, Automation Technologies, and Their Regulation

Automation and robotic technologies have popped up in Volokh Conspiracy posts several times during the last few years – drone aircraft, autonomous or highly automated weapons, nursing and eldercare assistance machines and, of course, self-driving cars.  So I’m pleased to announce that Bryant Walker Smith, a leading expert on automation and the law, will be guest-blogging this week here at Volokh Conspiracy – on self-driving cars, and automation technologies and their regulation more broadly.

Bryant is a fellow at both Stanford Law School’s Center on Internet and Society (CIS) and Stanford’s Center for Automotive Research (CARS). I first met him at a Stanford conference where he presented a CIS report giving the only genuinely comprehensive analysis of whether a self-driving car would be legal under the law of each of the 50 states, the federal government, and the Geneva Convention you have never heard of – on driving automobiles. He trained and worked as a civil engineer before studying law, and his academic writing focuses on torts, technology, legislation and regulation, as well as international economic and environmental law.

Apart from the CIS report, Bryant has also written a number of straightforwardly academic law review articles (he is on the law teaching job market this year, and is a lecturer at SLS, where he teaches a class on self-driving vehicles and the law). Particularly interesting to me (in part because it is counterintuitive to some understandings of automation technologies and traffic management) is “Managing Autonomous Transportation Demand” – it suggests that genuinely successful automation might increase demand for driving and hence put greater, not lesser, pressure on road systems and traffic management, and it applies a set of engineering concepts to make recommendations about how such demand, if it were to materialize in this way, might be managed efficiently.

Bryant will [...]

Continue Reading

Domestic Drone Regulation for Safety and Privacy

Today’s (Sunday, Sep. 8, 2013) New York Times has a story by Anne Eisenberg, “Preflight Turbulence for Commercial Drones.” The article combines two crucial topics in connection with drones – remotely piloted aerial vehicles, or unmanned aerial vehicles (UAVs), though my advice to the industry and the USAF is that the People Have Spoken, and it’s “drones” – namely, safety and privacy. The article is interesting chiefly because it focuses on commercial drones (rather than military, law enforcement, or hobbyist drones, as so many articles do). It talks about the likely path of commercial uses of drones:

Companies in the United States are preparing for drones, too. Customers can buy an entire system, consisting of the aerial vehicle, software and a control station, for less than $100,000, with smaller systems going for $15,000 to $50,000, said Jeff Lovin, a senior vice president at Woolpert, a mapping and design firm in Dayton, Ohio. Woolpert owns six traditional, piloted twin-engine aircraft to collect data for aerial mapping; these typically cost $2 million to $3 million to buy, and several thousand dollars an hour to operate, he said.

Gavin Schrock, a professional surveyor and associate editor of Professional Surveyor magazine, says he thinks that surveyors will be among the first to add drones to their tool kits. Aerial systems are perfect for surveying locations like open-pit mines, he said. A small drone can fly over a pit, shuttling back and forth in overlapping rows, taking pictures that can be stitched together and converted into a three-dimensional model that is accurate to within a few inches. Such a system is safer than having a surveyor walk around the pit with traditional tools. “I hate doing that,” Mr. Schrock said. “It’s dangerous.”

For many commercial applications, in other words, the choice will become [...]

Continue Reading

The Plight of Star Wars’ Droids

Erik Sofge has an interesting Slate article about the oppression to which droids are subjected in the Star Wars universe:

George Lucas doesn’t care about metal people. No other explanation makes sense. In a kid-targeted sci-fi setting that’s notably inclusive, with as many friendly alien characters as villainous ones, the human rights situation for robots is horrifying. They’re imbued with distinctly human traits—including fear—only to be tortured and killed for our amusement. They scream while being branded, and cower before heroes during executions….

When we meet C-3PO—in the original, 1977 Star Wars—he’s a nuisance. He’s a coward aboard Princess Leia’s besieged spaceship, and, after being sold to Luke Skywalker’s uncle (as part of a package deal, with the invaluable R2-D2), he spends nearly every moment aghast or needling at his braver companions. But C-3PO’s grating state of constant terror isn’t unwarranted. When Luke discovers that R2-D2 has left his post to look for Obi-Wan, the protocol droid practically swoons. “It wasn’t my fault, sir,” he wails, “please don’t deactivate me!”

It’s a throwaway line, part of C-3PO’s responsibilities as resident comic foil. But the implications aren’t so easily dismissed. As the movies progress, we see further evidence that droids experience fear, joy, and misery (even the redoubtable R2 is prone to the occasional whimper-whistle). And yet, they’re bought and sold like property. They are property, with C-3PO passed from owner to owner, his consciousness shut down temporarily when his nattering is too much to bear, or permanently rearranged without a moment’s hesitation or apology. C-3PO isn’t (simply) craven, when he quails before his new master. C-3PO knows the score. They deactivate droids, don’t they?

As Sofge suggests, the interesting thing about the role of droids in the Star Wars universe is not that they are an oppressed class, but [...]

Continue Reading

Obama’s Speech on Drones and the War on Terror

In his speech today on drones and the War on Terror, President Obama made many valid points. But he also continued to elide some key issues. On the plus side, Obama correctly emphasized that the use of drones against terrorists is neither inherently illegal nor immoral, that drones are often more discriminating and less likely to inflict civilian casualties than other military tactics, and that US citizens can be legitimate targets when they become enemy combatants.

Unfortunately, Obama also continued to dance around the more problematic aspects of his drone policy: who decides whether a particular individual being considered as a potential target is really a member of Al Qaeda, and how much evidence is needed to back up such a determination? I emphasized these issues in my recent Senate Judiciary Subcommittee testimony on drones and here. Here are the most relevant parts of Obama’s speech on these questions:

In the Afghan war theater, we must — and will — continue to support our troops until the transition is complete at the end of 2014. And that means we will continue to take strikes against high value al Qaeda targets, but also against forces that are massing to support attacks on coalition forces. But by the end of 2014, we will no longer have the same need for force protection, and the progress we’ve made against core al Qaeda will reduce the need for unmanned strikes.

Beyond the Afghan theater, we only target al Qaeda and its associated forces. And even then, the use of drones is heavily constrained. America does not take strikes when we have the ability to capture individual terrorists; our preference is always to detain, interrogate, and prosecute. America cannot take strikes wherever we choose; our actions are bound by consultations with partners, and respect

[...]

Continue Reading

The Case for Drones

Just in time for President Obama’s big speech Thursday at the National Defense University on counterterrorism policy and strategy, Commentary Magazine has made my June cover article, “The Case for Drones,” available early. (It is available free, not behind the subscriber wall.) It’s a long essay arguing that drones are both effective and ethical, and addressing a number of the objections to each of those propositions.

The article has a particular audience in mind. It is aimed especially at conservatives and Republican members of Congress, to remind them that their sometimes knee-jerk attacks on the “imperial” Obama presidency risk undermining one major piece of national security policy that the Obama administration has got well and truly right. There is no lack of imperial-presidency, abuse-of-power material for conservatives to work with – pick your issue this week – but this particular issue is one where, if conservatives look down the road, they ought to see that any president, Republican or Democrat, will need to have the tools of drone warfare available. It would be a remarkably foolish thing if, by inattention or merely reflexive attacks on the Obama administration’s drone policy, Republicans in Congress wound up permitting drone warfare to be made politically, morally, or legally illegitimate – just as a future Republican president enters office and discovers that, yes, there are terrorist threats best addressed by drones. Congressional Republicans, in the midst of the many abuse-of-power hearings, ought nonetheless to be scheduling hearings inviting current and former administration officials to reiterate their legal views on drone warfare, with the express purpose of standing with the President on this tool of national security and its permanent, legal, and legitimate place.

Commentary is a conservative magazine, obviously, and I’m writing there as a [...]

Continue Reading