Friday, May 15, 2009

Robot Ethics: Military Robots in Iraq and Afghanistan

Last week, Logan included this link to an excellent Reuters article at the bottom of a long post full of links. Many of you, like me, might have missed it altogether. I caught it on my second read, and I'm really glad I did, because somehow I had no idea that in the all-of-this-has-happened-before timeline, we're about three years from the Cylon uprising. The technology exists: tens of thousands of military robots, equipped with lethal weapons and the ability to make autonomous decisions about when to use them, have been deployed in conflict zones around the world... and so now we just wait for things to go terribly, terribly wrong. Which, personally, I find really exciting.

The story relies heavily on the report Autonomous Military Robotics: Risk, Ethics, and Design, prepared by California Polytechnic State University for the US Navy's Office of Naval Research. The report surveys military robot development over the past six years and breaks down some of the current and potential ethical dilemmas that are inescapable whenever autonomous or semi-autonomous machines are equipped with lethal weaponry…

It’s a huge document, and consistently fascinating. Here are some highlights from the first 25 pages… more excerpts, and maybe even my summary thoughts, will follow, both here and on my own website.

=========================================================

“Technology, however, is a double‐edged sword with both benefits and risks, critics and advocates; and autonomous military robotics is no exception, no matter how compelling the case may be to pursue such research. The worries include: where responsibility would fall in cases of unintended or unlawful harm, which could range from the manufacturer to the field commander to even the machine itself; the possibility of serious malfunction and robots gone wild; capturing and hacking of military robots that are then unleashed against us; lowering the threshold for entering conflicts and wars, since fewer US military lives would then be at stake; the effect of such robots on squad cohesion, e.g., if robots recorded and reported back the soldier’s every action; refusing an otherwise‐legitimate order; and other possible harms.”

“First, in this investigation, we are not concerned with the question of whether it is even technically possible to make a perfectly‐ethical robot, i.e., one that makes the ‘right’ decision in every case or even most cases. Following Arkin, we agree that an ethically‐infallible machine ought not to be the goal now (if it is even possible); rather, our goal should be more practical and immediate: to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes [Arkin, 2007]. Considering the number of incidences of unlawful behavior—and by ‘unlawful’ we mean a violation of the various Laws of War (LOW) or Rules of Engagement (ROE), which we also will discuss later in more detail—this appears to be a low standard to satisfy, though a profoundly important hurdle to clear.”

“it is surprising to note that one of the most comprehensive and recent reports on military robotics, Unmanned Systems Roadmap 2007‐2032, does not mention the word ‘ethics’ once nor risks raised by robotics, with the exception of one sentence that merely acknowledges that “privacy issues [have been] raised in some quarters” without even discussing said issues [US Department of Defense, 2007, p. 48].”

In a military context, a robot is “a powered machine that (1) senses, (2) thinks (in a deliberative, non‐mechanical sense), and (3) acts.” “And robots can be considered as agents, i.e., they have the capacity to act in a world, and some even may be moral agents, as discussed in the next definition.”
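
(Just to make that sense-think-act definition a little more concrete, here's a toy Python sketch of the agent loop the report is describing. This is entirely my own illustration; the report doesn't include any code, and every class name and number below is made up.)

```python
# Toy sketch of the report's sense-think-act definition of a robot-as-agent.
# All names and values here are invented for illustration; nothing below comes from the report.

import time

class SentryBot:
    def sense(self):
        """(1) Sense: gather raw observations from hypothetical sensors."""
        return {"camera": "warm body detected", "range_m": 40}

    def think(self, observation):
        """(2) Think: deliberate over the observation and choose an action."""
        if observation["range_m"] < 50:
            return "alert_human_operator"   # note: no autonomous use of force in this toy version
        return "keep_watching"

    def act(self, action):
        """(3) Act: carry out the chosen action in the world."""
        print(f"executing: {action}")

if __name__ == "__main__":
    bot = SentryBot()
    for _ in range(3):          # run a few iterations of the sense-think-act loop
        obs = bot.sense()
        bot.act(bot.think(obs))
        time.sleep(1)
```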

“the US Army Surgeon General’s Office had surveyed US troops in Iraq on issues in battlefield ethics and discovered worrisome results. From its summary of findings, among other statistics: “Less than half of Soldiers and Marines believed that non‐combatants should be treated with respect and dignity and well over a third believed that torture should be allowed to save the life of a fellow team member. About 10% of Soldiers and Marines reported mistreating an Iraqi non‐combatant when it wasn’t necessary…Less than half of Soldiers and Marines would report a team member for unethical behavior…Although reporting ethical training, nearly a third of Soldiers and Marines reported encountering ethical situations in Iraq in which they didn’t know how to respond” [US Army Surgeon General’s Office, 2006]. The most recent survey by the same organization reported similar results [US Army Surgeon General’s Office, 2008]. Wartime atrocities have occurred since the beginning of human history, so we are not operating under the illusion that they can be eliminated altogether (nor that armed conflicts can be eliminated either, at least in the foreseeable future). However, to the extent that military robots can considerably reduce unethical conduct on the battlefield—greatly reducing human and political costs—there is a compelling reason to pursue their development as well as to study their capacity to act ethically.”

“Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways. (And even straightforward, simple rules such as Asimov’s Laws of Robotics can create unexpected dilemmas [e.g., Asimov, 1950].) Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity [e.g., Kurzweil, 1999, 2005].”

“Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn… Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments…”

“Now, technical advances in robotics are catching up to literary and theatrical accounts, so the seeds of worry that have long been planted in the public consciousness will grow into close scrutiny of the robotics industry with respect to those ethical issues…”

“The South Korean [sentry robot] is capable of interrogating suspects, identifying potential enemy intruders, and autonomous firing of its weapon.”

“… as of this writing, none of the fielded systems has full autonomy in a wide context. Many are capable of autonomous navigation, localization, station keeping, reconnaissance and other activities, but rely on human supervision to fire weapons, launch missiles, or exert deadly force by other means; and even the Navy’s CIWS operates in full‐auto mode only as a reactive last line of defense against incoming missiles and does not proactively engage an enemy or target. Clearly, there are fundamental ethical implications in allowing full autonomy for these robots. Among the questions to be asked are:
• Will autonomous robots be able to follow established guidelines of the Laws of War and Rules of Engagement, as specified in the Geneva Conventions?
• Will robots know the difference between military and civilian personnel?
• Will they recognize a wounded soldier and refrain from shooting?”

“We anticipate that autonomy will be granted to robot vehicles gradually, as confidence in their ability to perform their assigned tasks grows. Further, we expect to see learning algorithms that enable the robot to improve its performance during training missions. Even so, there will be fundamental ethical issues. For example, will a supervising warfighter be able to override a robot’s decision to fire? If so, how much time will have to be allocated to allow such decisions? Will the robot have the ability to disobey a human supervisor’s command, say in a situation where the robot makes the decision not to release a missile on the basis that its analysis leads to the conclusion that the number of civilians (say women and children) greatly exceeds the number of insurgents in the house?”
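
(To make those override and proportionality questions a bit more concrete, here's a toy Python sketch of what a supervisor veto window plus a crude civilian-to-insurgent check might look like. The report only poses these as open questions and doesn't specify any such logic; every threshold, timeout, and function name below is my own invention.)

```python
# Toy sketch of a human-in-the-loop veto window and a crude proportionality check.
# All values and names are assumptions made up for illustration, not anything from the report.

import time

VETO_WINDOW_SECONDS = 10     # how long a supervisor gets to override (assumed)
MAX_CIVILIAN_RATIO = 0.5     # refuse if civilians exceed this fraction of insurgents (assumed)

def proportionality_check(estimated_civilians: int, estimated_insurgents: int) -> bool:
    """Return True only if the (estimated) civilian presence is acceptably low."""
    if estimated_insurgents == 0:
        return False
    return estimated_civilians / estimated_insurgents <= MAX_CIVILIAN_RATIO

def wait_for_human_veto(timeout: float) -> bool:
    """Block for `timeout` seconds; return True if the supervisor vetoes in time.
    Stubbed out here; a real system would listen on a comms channel instead."""
    time.sleep(timeout)
    return False

def decide_to_fire(estimated_civilians: int, estimated_insurgents: int) -> bool:
    if not proportionality_check(estimated_civilians, estimated_insurgents):
        return False        # the robot declines on the basis of its own analysis
    if wait_for_human_veto(VETO_WINDOW_SECONDS):
        return False        # the supervising warfighter overrode the decision
    return True

print(decide_to_fire(estimated_civilians=12, estimated_insurgents=3))  # -> False
```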

4 comments:

Anonymous said...

Totally missed it, too, Sam. Thanks, although I was thinking about this news of military robotics a bit after the finale. As if Blackwater weren't bad enough.

Charlie said...

See, the finale should have shown military and worker robots instead of the entertainment ones they went with. I'm more scared of these bots capable of firing live ammo than I am of an AIBO or ASIMO.

Hieu Le Bui said...

All of this has happened before and it will happen again. Sound familiar? Maybe RDM is the real Starbuck/Angel and he's telling the story of BSG to warn us little humans.

Darren said...

@Hieu Le Bui... Yeh, probably...