Archive | Robotics

Bill to Be Introduced to Increase Armed Services Committees’ Oversight Over Special Operations

Rep. Mac Thornberry (R-Texas), a member of the US House of Representatives Armed Services Committee, plans to introduce a bill that would increase Congressional oversight over kill-capture operations conducted outside of Afghanistan by the US military.  University of Texas law professor Robert Chesney discusses the proposed legislation over at Lawfare, and gives a section-by-section commentary.  Whether this is an important step or not depends on one’s starting point, of course; I agree with Chesney that it is a big deal and a welcome step toward regularizing these operations.  (Though if one’s view is that all these operations are unlawful, or that they require judicial oversight, or something else, whether from the Left of the Democratic Party or what we might call the Pauline wing of the Republican Party, then you won’t be much moved.)

Seen within the framework of US law and oversight of overseas use of force operations, this is an important step.  A couple of observations.  First, this (soon-to-be) proposed legislation is with respect to operations conducted by the US military under US Code Title 10; it does not cover CIA activities, which are already subject to oversight and reporting under US Code Title 50.  Second, it covers US military operations with respect to the lines of oversight running back to the Armed Services committees; essentially it increases the role of the Armed Services committees in oversight of US military operations in what it defines as “Sensitive Military Operations” – which in practice means clandestine Joint Special Operations Command (JSOC) activities.  It does not alter the existing oversight processes of Congressional intelligence committees governing covert action as defined in US Code Title 50, but extends and increases oversight over military operations.  Why this focus on military operations conducted by JSOC?

Counterintuitive as many might find it, [...]

Continue Reading

John Villasenor on Domestic Drones, Airspace Safety, and Privacy Protection

John Villasenor – a professor of engineering at UCLA and a Brookings Institution senior fellow – has a new article at Slate on the domestic use of drones.  (The article is part of a conference held yesterday at the New America Foundation in conjunction with Arizona State University on domestic drone policy, with many fine participants; well worth checking out.)  The article’s fundamental point is that many features likely to figure in FAA regulations intended to ensure safety in domestic airspace, as drones are allowed to enter it, will also be supportive of privacy concerns.  By no means does this make the problems of privacy go away, but it’s important to be aware of the ways in which safety regulation will affect and, in important ways, reinforce privacy.

For most of the 20th century, obtaining overhead images was difficult and expensive. Now, thanks to advances in unmanned aircraft systems—people in the aviation field tend to dislike the word drone—it has become easy and inexpensive, raising new and important privacy issues [PDF]. These issues need to be addressed primarily through legal frameworks: The Constitution, existing and new federal and state laws, and legal precedents regarding invasion of privacy will all play key roles in determining the bounds of acceptable information-gathering from UAS. But safety regulations will have an important and less widely appreciated secondary privacy role.

Why? Because safety regulations, which aim to ensure that aircraft do not pose a danger in the airspace or to people and property on the ground, obviously place restrictions on where and in what manner aircraft can be operated. Those same restrictions can also affect privacy from overhead observations from both government and nongovernment UAS. FAA regulations make it unlawful, for example, to operate any aircraft (whether manned or unmanned) “in

[...]

Continue Reading

Self-Driving Vehicles – How Soon and Who Will Bear the Liability Costs?

Self-driving cars are receiving a lot of attention these days – partly as the technologies that make them possible advance and partly because, well, we the public are more aware of them and are realizing there is quite a lot to discuss regarding their regulation and use.  As the technologies that appear to be making self-driving cars possible advance from the science fiction to the hypothetical to the possible to the likely, technological paths become sufficiently determinate that it makes sense to be talking about the social, legal, and regulatory structures for their use.

Indeed, we are probably a little late in holding these discussions, because knowledge of the social and regulatory conditions can, and does, influence technological designs – so generally, the earlier the better.  A new and quite interesting debate at the Economist asks whether and how soon these cars will be ready for market (it’s not a debate over whether they are desirable, but instead whether they will be feasible in the foreseeable future). It’s striking that the pro side (holding that they will be, and sooner rather than later) rests essentially on technological feasibility, while the con side rests partly on skepticism about the technologies but very considerably on whether the social, economic, legal, and regulatory hurdles will have been overcome.

Self-driving cars are special for a couple of reasons.  One is that they will (and already do) consist of a bundle of technologies – in one sense conceived in the usual robotics formulation of sensors, computation, and physical movement.  But in the case of cars, it’s better understood as automation of the distinct systems of a car: acceleration, braking, steering, etc.  These are being automated in separate systems, and combined together in the computer control of the total vehicle.
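To make that architectural point concrete – and only as a sketch; the class names, interfaces, and control logic below are invented for illustration, not drawn from any actual vehicle platform – the idea of separately automated subsystems composed under total-vehicle control looks roughly like this:

```python
# Illustrative sketch only: every name and interface here is hypothetical.
# The point from the text: braking, steering, etc. are automated as
# discrete subsystems, then combined under one vehicle-level controller.

class BrakeController:
    def command(self, obstacle_distance_m: float) -> float:
        # Simple proportional braking as an obstacle closes within 20 m,
        # clamped to [0, 1] (0 = no braking, 1 = full braking).
        return max(0.0, min(1.0, (20.0 - obstacle_distance_m) / 20.0))

class SteeringController:
    def command(self, lane_offset_m: float) -> float:
        # Steer back toward lane center, clamped to [-1, 1].
        return max(-1.0, min(1.0, -0.5 * lane_offset_m))

class VehicleController:
    """Composes independently automated subsystems into one vehicle."""
    def __init__(self):
        self.brakes = BrakeController()
        self.steering = SteeringController()

    def step(self, obstacle_distance_m: float, lane_offset_m: float) -> dict:
        # Each subsystem decides on its own; the vehicle controller
        # merely coordinates their outputs each control cycle.
        return {
            "brake": self.brakes.command(obstacle_distance_m),
            "steer": self.steering.command(lane_offset_m),
        }
```

The design point this toy mirrors is the one in the paragraph above: each subsystem can be built, tested, and sold on its own (as safety features already are), and "self-driving" emerges from composing them, rather than from one monolithic system.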

A [...]

Continue Reading

The Debate About to Heat Up Over HRW’s Call to Ban “Killer Robots,” AKA Autonomous Weapon Systems

(And: thanks to Instapundit for linking to the new policy essay by Matthew Waxman and me from the Hoover Institution, referenced at the end of this post, Law and Ethics for Autonomous Weapon Systems – thanks Glenn!)

Last November, two documents appeared within a few days of each other, each addressing the emerging legal and policy issues of autonomous weapon systems – and taking strongly incompatible, indeed opposite, approaches.  One was from Human Rights Watch, whose report, Losing Humanity: The Case Against Killer Robots, made a sweeping, preemptive, provocative call for an international treaty ban on the use, production, and development of what it defined as “fully autonomous weapons” and dubbed “Killer Robots.”  Human Rights Watch has followed that up with a public campaign for signatures on a petition supporting a ban, as well as a number of publicity initiatives that (I think I can say pretty neutrally) seem as much drawn from sci-fi and pop culture as anything.  It plans to launch this global campaign at an event at the House of Commons in London later in April.

The other was the Department of Defense Directive, “Autonomy in Weapon Systems” (3000.09, November 21, 2012).  The Directive establishes DOD policy and “assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems … [and] establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems.”

By contrast to the sweeping, preemptive treaty ban approach embraced by HRW, the DOD Directive calls for a review and regulatory process – in part an administrative expansion of the existing legal weapons review process within DOD, but reaching back to the very beginning of the research and development process.  In part it aims to ensure that whatever level of autonomy a weapon [...]

Continue Reading

US Surveillance Drones Aid French Airstrikes in Mali

The Wall Street Journal national security reporting team has a new article in today’s Journal on how US surveillance drones are providing intelligence and targeting information to French forces in Mali, which then use the information to direct French (manned) airstrikes.  The drone surveillance marks, according to the article, a widened role for the US in support of French military operations in Mali:

U.S. Reaper drones have provided intelligence and targeting information that have led to nearly 60 French airstrikes in the past week alone in a range of mountains the size of Britain, where Western intelligence agencies believe militant leaders are hiding, say French officials.

The operations target top militants, including Mokhtar Belmokhtar, the mastermind of January’s hostage raid on an Algerian natural gas plant that claimed the lives of at least 38 employees, including three Americans. Chad forces said they killed him on Saturday, a day after saying they had killed Abdelhamid Abou Zeid, the commander of al Qaeda in the Islamic Maghreb’s Mali wing.

French, U.S. and Malian officials have not confirmed the deaths of Mr. Belmokhtar or Mr. Zeid, citing a lack of definitive information from the field. But they say the new arrangement with the U.S. has led in recent days to a raised tempo in strikes against al Qaeda-linked groups and their allies some time after the offensive began in January. That is a shift for the U.S., which initially limited intelligence sharing that could pinpoint targets for French strikes.

The lack of French drone capacity, for surveillance or attack, was noted in a New York Times article two weeks ago that profiled the French Defense Minister, Jean-Yves Le Drian.  Le Drian was blunt about the need for and the lack of drones (emphasis added below):

[W]hile the French express hope that African forces

[...]

Continue Reading

A Gradual Shift in Human Attitudes Toward Emotional Interaction with Robots?

Sherry Turkle is an MIT professor who studies human-robot psychological and social interactions.  She has been documenting and studying the attitudes of humans toward having emotional relationships and affective interactions with robots over time, and notes a gradual shift toward seeing such interactions favorably.  She recently presented at the annual meeting of the American Association for the Advancement of Science; her talk was covered by LiveScience (Clara Moskowitz, Human Robot Relations: Why We Should Worry, LiveScience, 18 February 2013, HT Insta).  LiveScience is a popularizer of science, of course, and Turkle’s academic research is sober, restrained, and much more sophisticated than a general-interest site can easily convey, but the article captures well some important points.  First, attitudes are in fact shifting in the United States:

Turkle studies people’s thoughts and feelings about robots, and has found a culture shift over time. Where subjects in her studies used to say, in the 1980s and ’90s, that love and friendship are connections that can occur only between humans, people now often say robots could fill these roles …

Turkle interviewed a teenage boy in 1983, asking him whom he would turn to, to talk about dating problems. The boy said he would talk to his dad, but wouldn’t consider talking to a robot, because machines could never truly understand human relationships.  In 2008, Turkle interviewed another boy of the same age, from the same neighborhood as the first. This time, the boy said he would prefer to talk to a robot, which could be programmed with a large database of knowledge about relationship patterns, rather than talk to his dad, who might give bad advice.

Turkle is particularly well-known within the specialist community, however, for her concern that increasingly positive feelings toward machines as companions and replacements for human interaction are [...]

Continue Reading

Arming a Hobbyist Drone with a Paintball Gun

In a Hoover Institution essay a few weeks ago, the Brookings Institution’s Benjamin Wittes asked, “How long do we really think it will take before a gun enthusiast arms a remotely-piloted robotic aircraft with his favorite handgun (very doable by a competent layperson with a few thousand dollars to burn)?” He points at Lawfare today to a new YouTube video of a hobbyist who has mounted a paintball gun on a hobbyist drone.  The paintball gun is impressively accurate, all things considered.  I leave to Dave Kopel and other gun law experts here the legal ins and outs of whether an actual handgun may be mounted on a drone; my uninformed assumption is that it is illegal, indeed criminal, now; the YouTube video says repeatedly that a real weapon would be illegal. I’m not a legal expert in this area (on Gun Appreciation Day, following Dave Kopel’s suggestion to consider supporting Second Amendment groups, I re-joined the NRA after several years of lapse from sheer inattention, but I don’t follow this area save for international law issues such as the proposed arms treaty).  However, I learned of this video from former Deputy Attorney General Jim Comey, at a conference that looked at what it called the gradual proliferation of “many-to-many threats,” including cyber, bio-weaponization, and certain aspects of robotics and autonomous robotic systems.  “If this is what a novice with a small budget can accomplish,” the voiceover narrator says with understated ambiguity, “then clearly, this technology has a lot of potential.” Actually, from the standpoint of the individual gun owner whose interest is self-defense, my guess is that this technology is pretty limited in its application, unless a considerable amount of automation were introduced into it. It might be useful for home defense, I suppose, to send a drone rather than sending yourself, but [...]

Continue Reading

The Uncanny Valley – The Original Human-Robot Psychology Essay by Masahiro Mori

VC readers, being eclectic polymaths, are likely to have heard of the “Uncanny Valley” – the hypothesis advanced by roboticist Masahiro Mori that a “person’s response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley.” Mori’s article appeared more than 40 years ago in an obscure Japanese journal called Energy, but was never widely available in complete form in English.  Last year, Automaton, IEEE Spectrum’s robotics blog, published a complete translation of the article.  I had never read it in full, and I thought it might interest VC readers.  The notion of the Uncanny Valley has taken on greater importance as robots are gradually being developed that are intended to have greater human-machine interaction.  And the article is important in its own right as part of the intellectual history of science and technology.  Here is the editor’s introduction, from which the above quote is taken:

More than 40 years ago, Masahiro Mori, then a robotics professor at the Tokyo Institute of Technology, wrote an essay on how he envisioned people’s reactions to robots that looked and acted almost human. In particular, he hypothesized that a person’s response to a humanlike robot would abruptly shift from empathy to revulsion as it approached, but failed to attain, a lifelike appearance. This descent into eeriness is known as the uncanny valley. The essay appeared in an obscure Japanese journal called Energy in 1970, and in subsequent years it received almost no attention. More recently, however, the concept of the uncanny valley has rapidly attracted interest in robotics and other scientific circles as well as in popular culture. Some researchers have explored its implications for human-robot interaction and computer-graphics animation, while others have investigated its biological and social roots.

[...]
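Mori’s essay is, at bottom, a claim about the shape of a curve: affinity as a function of human likeness, with a sharp dip just short of full lifelikeness.  Purely as an illustrative aside – Mori offered only a sketch graph, never an equation, and every constant and functional form below is invented – the hypothesis can be caricatured in a few lines of Python:

```python
import math

# Toy model of Mori's hypothesized curve. Not from Mori's essay:
# the Gaussian dip and all constants are invented for illustration.
def affinity(likeness: float) -> float:
    """likeness in [0, 1]; returns a unitless affinity score."""
    trend = likeness  # general trend: more humanlike, more empathy
    # A dip centered near (but short of) full likeness stands in
    # for the "valley" of eeriness Mori described.
    valley = 0.9 * math.exp(-((likeness - 0.85) / 0.07) ** 2)
    return trend - valley
```

On this toy curve, a robot at roughly 85% human likeness scores lower than one at 50%, while a fully lifelike one scores highest – the valley shape the editor’s introduction describes.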

Continue Reading

Law and Robots Conference Call for Papers, and a Link to a Video From HRW’s Tom Malinowski Which, Though I Am Not Persuaded, I Will Always Treasure

The “Law and Robotics Conference” will take place on April 8-9, 2013, at Stanford Law School (it follows on the highly successful law and robotics conference that took place at University of Miami last year).  Conference organizers are seeking proposals to present conference papers – I should have posted this a while ago – and paper proposals are due by this Friday, January 18.  Matthew Waxman and I plan to submit a proposal on comparing self-driving cars and autonomous weapon systems (I’ve been exploring some of these ideas, brainstorming for the paper, here at Volokh), and I am 100% certain the conference will be terrific with outstanding papers and great discussions.  Here is the link if you’re interested.

Meanwhile, over at Lawfare, Human Rights Watch’s Tom Malinowski, Benjamin Wittes, Matthew Waxman, and I have been debating the recent HRW report calling for a ban on “Killer Robots.”  Tom’s latest response – mostly a serious discussion, well worth reading though it does not succeed in persuading me – has a video at the end that I will always, always fondly treasure.  It’s great.  (It’s in Hindi, and though I didn’t know Tom knew Hindi, I’m going to trust his subtitles.) [...]

Continue Reading

An “Ethical Turing Test”? More on Comparing Self-Driving Vehicles and Autonomous Weapon Systems

In my earlier posts comparing self-driving cars and autonomous weapon systems, I pointed out that in neither case are we seeing a sudden, systemic paradigm change – a shift from one whole technological system to another.  Not, at least, in the sense I had long assumed: that a change-over to driverless cars would necessarily require a systemic change from individuals driving their individual cars to a centralized computer system dealing with all the vehicles as a whole – including things like sensors in the roads, no commingling of system-controlled cars with individually-controlled cars, etc.

Instead, the changes in these particular technologies are occurring incrementally.  It might be different for other technologies, but for these, the changes are taking place bit by bit.  Cars being sold today are gradually incorporating more and more of these automated systems as safety and convenience features.  This alters the nature of the legal, ethical, and policy review that has to be made of the systems – regulatory review, too, has to be incremental.  Moreover, changes toward automation often occur in highly discrete technological functions within the larger activity – braking systems in cars, for example, or the detailed and particular criteria used for target identification in weapons.  Legal, ethical, and policy decisions have to address both the particular function and its impact on the overall machine system.  In this regard, I once again highly recommend the new report by Bryant Walker Smith (Stanford Center for Internet and Society) on the legality of self-driving cars in the US. For those of us interested in weapon systems, it provides a useful basis for comparing the ways in which vehicle codes will have to gradually take account of evolutionary technologies with what the legal review for automating [...]

Continue Reading

(Updated) The Incremental Progress of Self-Driving Cars and Current Safety Systems

I’m continuing my series of posts on automated vehicles (the last one was some initial thoughts on comparisons between self-driving cars and autonomous weapon systems).  Today I want to recommend this January 12, 2013 NYT story, by John Markoff and Somini Sengupta, on the current state of safety systems for cars in the incremental advance toward fully automated and finally self-driving vehicles.  Plus, in order to understand the regulatory and legal context in which this transformation necessarily takes place, I also highly recommend the new report by Bryant Walker Smith (Stanford Center for Internet and Society), on the legality of self-driving cars in the US. It makes a useful basis for comparing the ways in which vehicle codes will have to gradually take account of evolutionary technologies.

New York State, for example, requires in its vehicle code that drivers have one hand on the steering wheel at all times; that obviously won’t be compatible with the emergence of self-driving cars. Even Nevada (a state that has positioned itself ahead of the curve by adopting a self-driving car provision) requires that the car have a human driver who is responsible and able to take over driving.  Texting while the car drives itself is okay, in other words, but getting into the vehicle drunk and telling it to drive you home is not, because you would not be able to drive if necessary.  Yet technology will presumably alter that, and the vehicle code will adapt as the technology improves, given that a core purpose of self-driving vehicles is to drive people who are incapacitated, whether by alcohol or, more importantly, by age.  After all, Google is betting its self-driving cars on a market among elderly baby boomers who can’t (or shouldn’t) be driving.

Which goes to illustrate that a key focus and market [...]

Continue Reading

Legal Infrastructure for Driverless Cars, and Comparisons Between the Law and Ethics of Self-Driving Cars and Autonomous Weapon Systems

(Thanks Instapundit for the link.)  Driverless cars are coming faster than most observers would have thought.  One big reason, according to Bryant Walker Smith in a recent article in Slate, is that people predicting the driverless car future assumed that such cars would have to be part of centrally-run systems, with corresponding changes to physical infrastructure, such as special roads embedded with magnets.  Or for that matter, we can add, centralized computers to take control of all the vehicles in the system.  The changeover has to be centralized and take place for a given area all at once; it doesn’t scale incrementally.  That was the thought, anyway, and Smith (who is a fellow at Stanford’s Center for Internet and Society) says that as a consequence, ever “since the 1930s, self-driving cars have been just 20 years away.”

II. Self-Driving Cars

Today’s self-driving systems, however, are “intended to work with existing technologies.”  They use sensors and computers to act as individual vehicles responding to the environment around them individually, without having to be a cog in the larger machine.  This means that they can adapt to the existing infrastructures rather than requiring that they all be replaced as a whole system.  Smith’s real point, however, is to go on from physical infrastructure to include the rules of the road.  Infrastructure also includes, he says,

laws that govern motor vehicles: driver licensing requirements, rules of the road, and principles of product liability, to name but a few. One major question remains, though. Will tomorrow’s cars and trucks have to adapt to today’s legal infrastructure, or will that infrastructure adapt to them?

Smith takes up the most basic of these questions – are self-driving vehicles legal in the US?  They probably can be, he says – and he should know, as [...]

Continue Reading

How’s the FAA Coming on Integrating Domestic Drones into US Airspace?

Though much of my attention goes to military drones used outside the US, the regulatory issues surrounding US domestic drones are vital and difficult.  Domestic drones of many types, sizes, and functions will become an increasingly important part of US aviation over time, and the issues are thorny.  Lawyer types like me tend to focus on privacy issues, or government surveillance, and such issues – but there are big questions long before one gets to those about how even to integrate drones into the existing domestic airspace.  The FAA has been tasked by Congress to get on with resolving the whole range of domestic drone issues.  Wells Bennett – a lawyer who is the Lawfare blog’s Special Correspondent and a Visiting Fellow in National Security Law at the Brookings Institution – has a new paper up at the Brookings website (link here at Lawfare) on how it’s coming along.  It covers:

  • (1) key benchmarks set by the FAA Modernization and Reform Act, the statute behind the integration process;
  • (2) the agency’s progress to date in meeting those benchmarks; and
  • (3) core policy issues that must be addressed before late 2015—the so-called “deadline” for integration of privately- as well as government-operated drones.

  [...]

Continue Reading

Robo-Doctors? Or Robo-Nurses?

Wired (H/T Instapundit) has a nice article by Daniela Hernandez on the coming of “robo-doctors.” Not quite yet what our sci-fi imaginations desire, but still an important development on its own terms:

Charlie Huiner, the vice president of InTouch Health … sees robo-docs rising … His company is developing robots that allow doctors to “provide their care and expertise” remotely, he said at the second day of the Wired Health Conference.  Huiner doesn’t call his robots replacement doctors. He calls them conduits of care. The robot’s patented autonomous capabilities let a flesh-and-blood doctor on the other side tell their android helper what to do. “It is as easy as tapping a point on a map or a patient room [on an iPad] and the robot will go there,” Huiner told Wired.

He showcased the company’s new ‘bot, RP-VITA, or Remote Presence Virtual + Independent Telemedicine Assistant, at Wired Health with company CEO Yulun Wang teleconferencing in from another location. “It’s like the movie Avatar, but for medical applications,” Wang said, appearing on the robot’s monitor-head, which has two eye socket-like cavities. (They’re a user-controlled, eye-friendly laser pointer.) The humanoid ‘bot, which InTouch developed with Roomba-maker iRobot, also can interface with third-party apps.

This technology in effect combines remote-conduits (doctor at different location; machine with patient); locomotion (tell machine where to go and it goes there); robotic capabilities and sensors to do things like take temperature (capabilities that will presumably get more sophisticated over time); and AI capacities for assisting the remotely-located doctor to do diagnosis.

There are important roles for automation, AI, and robotics with regard to the doctor’s role in “seeing” a patient – diagnostic computers will likely become increasingly important adjuncts for the doctor.  But my guess is that most of the genuinely “robotic” activities in health [...]

Continue Reading

Service Robots in the Hospital, and the Upcoming ‘We Robot’ Conference at University of Miami Law School

The Wall Street Journal has an article in its Thursday, March 15, 2012 edition titled “The Robots Are Coming to Hospitals.”  Reporter Timothy Hay explores ways in which robots are being deployed to transport linens, laundry, and other things around hospital complexes – which are, of course, often enormous facilities.  (I believe this is an open link.)  I have remarked several times here at Volokh that robotics has a natural place in health care – and, as I’ve said, even more so in nursing than in the operating theater.  (I regard it as the new “plastics.”)  The Journal article seems to agree:

In the next few years, thousands of “service robots” are expected to enter the health-care sector—picture R2D2 from “Star Wars” carrying a tray of medications or a load of laundry down hospital corridors. Fewer than 1,000 of these blue-collar robots currently roam about hospitals, but those numbers are expected to grow quickly. As America’s elderly population grows, the country’s health-care system is facing cost pressures and a shortage of doctors and nurses. Many administrators are hoping to foist some of the less glamorous work onto robots.

This could create a potential bonanza for software and application developers to write new programs for them, investors and industry watchers say. “My guess is that in five years, there will be 10 times the number of robots deployed in hospitals that there are today,” said Donald Jones, a managing director at Draper Triangle Ventures, who is backing privately held robotics company Aethon Inc. “We are just not going to have enough human hands to do all the work.”

These technologies will piggy-back off of many existing and emerging technologies; some of their first and most important roles will be adaptations of technologies developed for warehouse fulfillment centers such as those used by [...]

Continue Reading