SHOULD A ROBOT DECIDE WHEN TO KILL?

As reported on The Verge.

By Adrianne Jeffries

The ethics of war machines

By the time the sun rose on Friday, December 19th, the Homestead Miami race track had been taken over by robots. Some hung from racks, their humanoid feet dangling above the ground as roboticists wheeled them out of garages. One robot resembled a gorilla, while another looked like a spider; yet another could have been mistaken for a designer coffee table. Teams of engineers from MIT, Google, Lockheed Martin, and other institutions and companies replaced parts, ran last-minute tests, and ate junk food. Spare heads and arms were everywhere.

It was the start of the Robotics Challenge Trials, a competition put on by the Defense Advanced Research Projects Agency (DARPA), the branch of the US Department of Defense dedicated to high risk, high reward technology projects. Over a period of two days, the machines would attempt a series of eight tasks including opening doors, clearing a pile of rubble, and driving a car.

The eight robots that scored highest in the trials would go on to the finals next year, where they would compete for a $2 million grand prize. And one day, DARPA says, these robots will be defusing roadside bombs, surveilling dangerous areas, and assisting after disasters like the Fukushima nuclear meltdown.

Mark Gubrud, a former nanophysicist and frumpy professor sort, fit right in with the geeky crowd. But unlike other spectators, Gubrud wasn’t there to cheer the robots on. He was there to warn people.

“DARPA’s trying to put a face on it, saying ‘this isn’t about killer robots or killer soldiers, this is about disaster response,’ but everybody knows what the real interest is,” he says. “If you could have robots go into urban combat situations instead of humans, then your soldiers wouldn’t get killed. That’s the dream. That’s ultimately why DARPA is funding this stuff.”

As the US military pours billions of dollars into increasingly sophisticated robots, people inside and outside the Pentagon have raised concerns that machine decision-making will replace human judgment in war.

Around a year ago, the Department of Defense released directive 3000.09: “Autonomy in Weapons Systems.” The 15-page document defines an autonomous weapon — what Gubrud would call a killer robot — as a weapon that “once activated, can select and engage targets without further intervention by a human operator.”

The directive, which expires in 2022, establishes guidelines for how the military will pursue such weapons. A robot must always follow a human operator’s intent, for example, and must be designed to guard against failures that could cause the operator to lose control. Such systems may only be used after passing a series of internal reviews.

“IT’S A VETO POWER THAT YOU HAVE ABOUT A HALF-SECOND TO EXERCISE. YOU’RE MID-CURSE WORD.”

The guidelines are sketchy, however, relying on phrases like “appropriate levels of human judgment over the use of force.” That leaves room for systems that can be given an initial command by a human, then dispatched to select and strike their targets. DARPA is working on a $157 million long-range anti-ship missile system, for example, that is about as autonomous as an attack dog that’s been given a scent: it gets its target from a human, then seeks out and engages the enemy on its own.

Some experts say it could take anywhere from five to thirty years to develop autonomous weapons systems, but others argue that these weapons already exist. They don’t necessarily look like androids with guns, though. The recently tested X-47B is one of the most advanced unmanned drones in the US military: it takes off, flies, and lands on a carrier with minimal input from its remote pilot. The Harpy drone, built by Israel and sold to other nations, autonomously flies to a patrol area, circles until it detects an enemy radar signal, and then destroys the source by diving into it. Meanwhile, defense systems like the US Phalanx and the Israeli Iron Dome shoot down incoming missiles automatically, because those engagements unfold too quickly for human intervention.

“A human has veto power, but it’s a veto power that you have about a half-second to exercise,” says Peter Singer, a fellow at the Brookings Institution and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century. “You’re mid-curse word.”

Gubrud, an accomplished academic, first proposed a ban on autonomous weapons back in 1988. He’s typically polite, but talk of robotics brings out his combative side: he approached DARPA director Arati Prabhakar at one point during the challenge and tried to get her to admit that the agency is developing autonomous weapons.

He may have been the lone voice of dissent among the hundreds of robot-watchers at DARPA’s event, but Gubrud has some muscle behind him: the International Committee for Robot Arms Control (ICRAC), an organization founded in 2009 by experts in robotics, ethics, international relations, and human rights law. If robotics research continues unchecked, ICRAC warns, the future will be a dystopian one in which militaries arm robots with nuclear weapons, countries start unmanned wars in space, and dictators use killer robots to mercilessly control their own people.

AUTONOMOUS ROBOTS BY DESIGN

Humanoid

Most robotic weapons in the military don’t have hands or faces; they tend to look more like airplanes or small tanks. Android soldiers are unnerving to imagine, though, precisely because they do resemble people. Humanoid robots are useful in disaster response, DARPA says, because rescue work takes place in environments built for humans. But a robot that can walk through post-earthquake rubble would also be suited to combat in places like Baghdad and Kandahar.

Underwater

It’s harder to accidentally kill civilians when you’re underwater, and for that reason we may see autonomous weapons proliferate there before we see them on land. Autonomous undersea weapons could range from sea mines and torpedoes to submarines known as autonomous underwater vehicles (AUVs). Radio signals also don’t travel well in water, creating an incentive for programming greater autonomy.

Land

The most autonomous weapons currently in use are land-defense systems that react to incoming threats. Examples include the “close-in weapons systems” the US employs to shoot down incoming missiles and the Samsung Techwin SGR-A1, which is replacing human guards along parts of the South Korean border. That robot detects when a person enters its range and asks for a password. If a person offers the wrong one, the SGR-A1 can be set to fire automatically.

Concern about robot warfighters goes beyond a “cultural disinclination to turn attack decisions over to software algorithms,” as the autonomy hawk Barry D. Watts put it. Robots, at least right now, have trouble discriminating between civilians and the terrorists and insurgents who live among them. Furthermore, a robot’s actions are the product of its programming, its operator, its manufacturer, and other factors, which makes it difficult to assign responsibility if something does go wrong. And finally, replacing soldiers with robots would convert the cost of war from human lives to dollars, which could lead to more conflicts.

ICRAC and more than 50 organizations, including Human Rights Watch, the Nobel Women’s Initiative, and Code Pink, have formed a coalition calling itself the Campaign to Stop Killer Robots. Their request is simple: an international ban on autonomous weapons systems that would head off the robotics arms race before it really gets started.

“TIRELESS WAR MACHINES, READY FOR DEPLOYMENT AT THE PUSH OF A BUTTON, POSE THE DANGER OF PERMANENT … ARMED CONFLICT.”

There has actually been some progress on this front. A United Nations report in May 2013 called for a temporary ban on lethal autonomous systems until nations set down rules for their use. “There is widespread concern that allowing lethal autonomous robots to kill people may denigrate the value of life itself,” the report says. “Tireless war machines, ready for deployment at the push of a button, pose the danger of permanent (if low-level) armed conflict.”

The UN Convention on Certain Conventional Weapons will convene a meeting of experts this spring, the first step toward an international arms agreement. “We need to have a clear view of what the consequences of those weapons could be,” says Jean-Hugues Simon-Michel, the French ambassador to the UN Conference on Disarmament and its chairman, who persuaded the other nations to take up the issue. “And of course when there is a particular concern with regard to a category of weapons, it’s always easier to find a solution before those weapons exist.”

Watching the robots stumble around the simulated disaster areas at the DARPA trials would have been reassuring to anyone worried about killer robots. Today’s robots are miracles of science compared to those from 20 years ago, but they are still seriously impaired by lousy perception, energy inefficiency, and rudimentary intelligence. The machines move agonizingly slowly and wear safety harnesses in case they fall, which happens often.

The capabilities being developed for the challenge, however, are laying the groundwork for killer robots should we ever decide to build them. “We’re part of the Defense Department,” DARPA’s director, Arati Prabhakar, acknowledges. “Why do we make these investments? We make them because we think that they’re going to be important for national security.” One recent report from the US Air Force notes that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes.”

“IF WE CAN PROTECT INNOCENT CIVILIAN LIFE, I DO NOT WANT TO SHUT THE DOOR ON THE USE OF THIS TECHNOLOGY.”

By some logic, that might be a good thing. Robot shooters are inherently more accurate than humans, and they’re unaffected by fear, fatigue, or hatred. Machines can take on more risk in order to verify a target, loitering in an area or approaching closer to confirm there are no civilians in the way.

“If we can protect innocent civilian life, I do not want to shut the door on the use of this technology,” says Ron Arkin, PhD, a roboticist and ethicist at the Georgia Institute of Technology who has collaborated extensively with Pentagon agencies on various robotics systems.

Arkin proposes that an “ethical governor,” a set of rules that approximates an artificial conscience, could be programmed into the machines in order to ensure compliance with international humanitarian law. Autonomy in these systems, he points out, isn’t akin to free will — it’s more like automation. During the trials, DARPA deliberately sabotaged the communications links between robots and their operators in order to give an advantage to the bots that could “think” on their own. But at least for now, that means being able to process the command “take a step” versus “lift the right foot 2 inches, move it forward 6 inches, and set it down.”

“When you speak to philosophers, they act as if these systems will have moral agency,” Arkin says. “At some level a toaster is autonomous. You can task it to toast your bread and walk away. It doesn’t keep asking you, ‘Should I stop? Should I stop?’ That’s the kind of autonomy we’re talking about.”

“No one wants to hear that they’re building a weapon,” says Doug Stephen, a software engineer at the Institute for Human and Machine Cognition (IHMC) whose team placed second at DARPA’s event. But he admits that the same capabilities being honed for these trials — ostensibly to make robots good for disaster relief — can also translate to the battlefield. “Absolutely anything,” Stephen says, “can be weaponized.”

His team’s robot, a modification of the humanoid Atlas built by Boston Dynamics, earned the most points in the least amount of time on several challenges, including opening doors and cutting through walls. When it successfully walked over “uneven terrain” built out of cinder blocks, the crowd erupted into cheers. Stephen and his team will now advance to the final stage of the challenge next year — alongside teams from MIT, NASA, and other institutions — to vie for the $2 million prize.

That DARPA funding could theoretically seed the rescue-robot industry, or it could kickstart the killer robot one. For Gubrud and others, it’s all happening much too fast: the technology for killer robots, he warns, could outrun our ability to understand and agree on how best to use it. “Are we going to have robot soldiers running around in future wars, or not? Are we going to have a robot arms race which isn’t just going to be these humanoids, but robotic missiles and drones fighting each other and robotic submarines hunting other submarines?” he says. “Either we’re going to decide not to do this, and have an international agreement not to do it, or it’s going to happen.”