Friday, September 25, 2009

ETHICS OF AUTONOMOUS 'LETHAL' ROBOTS

Last night I watched with interest Rodney's Robot Revolution, screened on ABC1 TV. It outlined the extensive work being undertaken in the field of robotics by Australian-born roboticist Professor Rodney Brooks. The featured robots displayed endearing humanoid body shapes and facial responses; however, the most disturbing aspect of the work for me was the progress roboticists are making towards the construction of autonomous 'lethal' robots that blindly follow programmed responses (killing) whilst on the battlefield. Theoretically, soldiers also learn to blindly follow orders, because to do otherwise would jeopardise the whole mission. How then is an autonomous 'lethal' robot different to a soldier trained to kill his/her enemy?
According to Ronald C. Arkin, one of the differences is that, 'due to the propensity of soldiers to tolerate and commit illegal acts under wartime conditions', 'ultimately autonomous systems can outperform humans ethically in the battlefield' (see the link to his article below). John Wiseman, drawing extensively from Ron Arkin's writing, encapsulates a number of reasons why autonomous systems might perform more ethically than their human counterparts. He lists the following (see his article linked below):

1. The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. UxVs do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate, without reservation, by a commanding officer. (A minimal sketch of such a conservative engagement rule follows this list.)
2. The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.
3. They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events. In addition, “Fear and hysteria are always latent in combat, often real, and they press us toward fearful measures and criminal behavior” [Walzer 77, p. 251]. Autonomous agents need not suffer similarly.
4. Avoidance of the human psychological problem of “scenario fulfillment” is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the USS Vincennes in 1988 [Sagan 91]. This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that only fit their pre-existing belief patterns, a form of premature cognitive closure. Robots need not be vulnerable to such patterns of behavior.
5. They can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real time. This can arise from multiple remote sensors and intelligence (including human) sources, as part of the Army’s network-centric warfare concept and the concurrent development of the Global Information Grid.
6. When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting infractions that might be observed. This presence alone might possibly lead to a reduction in human ethical infractions.
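
To make the first point concrete, here is a minimal, purely illustrative sketch of such a 'conservative' engagement rule, written in Python with invented names and thresholds. It is not Arkin's actual 'ethical governor' architecture, which is far more elaborate; it simply shows the idea that lethal force can be withheld whenever target identification is uncertain, and that self-preservation never overrides that restraint.

# Hypothetical sketch only: invented names and thresholds, not
# Arkin's actual "ethical governor" architecture.

CONFIRMED_COMBATANT_THRESHOLD = 0.95  # assumed confidence needed to engage

def engagement_decision(target_confidence: float, under_fire: bool) -> str:
    """Return a conservative action for a candidate target.

    The rule never escalates to lethal force merely because the
    platform itself is threatened: with no self-preservation drive,
    withholding fire is always an acceptable outcome.
    """
    if target_confidence >= CONFIRMED_COMBATANT_THRESHOLD:
        # Even a confirmed combatant only triggers a request up the
        # chain of command in this sketch, not autonomous firing.
        return "request human authorisation to engage"
    if under_fire:
        # Low certainty while taking fire: withdraw or absorb it
        # rather than risk shooting a possible civilian.
        return "withdraw / absorb fire"
    return "hold fire and keep observing"

# An ambiguous contact, even while under fire, still yields restraint.
print(engagement_decision(target_confidence=0.60, under_fire=True))

The point of the sketch is that restraint under uncertainty can be made the default outcome, something a frightened human soldier under fire finds far harder to guarantee.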

Science fiction films (such as Alien Resurrection, AI: Artificial Intelligence & Bicentennial Man) have often depicted machines and alien others behaving more ethically than their human counterparts.

Although fully autonomous 'lethal' robots are speculative at this stage, armed robots (very much like bomb-disposal robots with semi-automatic weapons attached) and drones (unmanned aircraft) are already deployed in the theater of war.

The ethics of using autonomous 'lethal' cyborgs that can't die against human beings who are obviously vulnerable to attack was a question raised in Rodney's Robot Revolution, as was the question of who would be responsible (the robots certainly wouldn't be) if a mission went horribly wrong. Machines are not the only things that slip out of our control, yet we tend to project our fears about 'things out of our control' onto them!

Obviously, replacing human beings with cyborgs is one way of avoiding the trauma and death experienced by soldiers on the battlefield. Indeed, if all battles were undertaken solely by the use of unmanned or inhuman machines, then it might be an equal state of play, but what if one side were human and the other machine?

Are we moving towards the future depicted in Terminator (James Cameron, 1984), in which humanity fights a losing battle against computer-controlled machines? More important for me is Arkin's point that: 'The threshold of entry into warfare may be lowered as we will now be risking machines and fewer human soldiers – this could violate the Jus ad Bellum (Justice of War or Laws of War) conditions of just warfare'.
http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA493429&Location=U2&doc=GetTRDoc.pdf
http://www.cc.gatech.edu/ai/robot-lab/online-publications/ArkinMoshkinaISTAS.pdf
http://lemonodor.com/archives/2008/02/ethics_in_lethal_robots.html
Ethics of autonomous 'lethal' robots
Armed robots on patrol

9 comments:

  1. Arkin gets it wrong. His imaginings of the robotic battlefield as "more ethical" are laughable; mechanized infantry have already left a long, unethical scar on Western culture. Arkin's descriptions of ethical autonomous systems resonate with West Bank portraits of suicide bombers, and with rationales for deploying them in conflict. The longing for machines more civilized than ourselves is old; in modern times it has been voiced by Donna Haraway and deconstructed by Elaine Scarry as the human longing for objects that reflect the civilization we think we are, or would like to be, if only we could.

  2. Of course, this whole question of ethics is paramount and you make an interesting point as to whether anything 'mechanical' (rational, programmed, ordered) can in fact be 'ethical'. Indeed, looking at what we consider ethical in war or in peace is vital to this issue. I've just read the differences between 'Jus ad Bellum' and 'Jus in Bello': whilst the former 'serves principles to determine when war and the use of violence are justifiable', the latter requires that 'the agents of war be held responsible for their actions'. An autonomous 'lethal' robot could not be held responsible for its actions, but those who deploy it could. However, authorities would always have the fallback position, when anything went wrong, that the machine 'was out of their control'!
    The question for me is not whether autonomous 'lethal' robots are capable of being 'ethical', but rather whether using them at all is 'ethical', since these robots cannot contain and capture human soldiers; they can only kill.
    (See articles on Jus ad Bellum and Jus in Bello at: http://www.beyondintractability.org/essay/jus_in_bello/)

  3. I would argue war is inherently unethical because it involves harming others.
    Both of the ethical systems you mention are inventions of the victors of wars. German, Japanese, Iraqi ethics were lost when those cultures were forced to remake themselves in defeat.

  4. Ethics is a difficult terrain to navigate.
    Is it unethical to protect your family from people who wish to harm them? I think not. I remember having this discussion when my son was a child. I was anti-war and anti-guns, but had to admit that I would take up arms to defend my child, and anyone that I loved, against attack. Obviously we take a different ethical stance depending on the situation. My concern here, in the first instance, was whether taking 'human' emotion out of the equation (which is what would happen with autonomous 'lethal' robots) is really the way to move forward.

  5. I find it very difficult to talk about ethics and war simultaneously, because I believe they are mutually exclusive.
    I mentioned ethics are specified by the victors in war. One example that comes to mind is German claims of American war crimes during 1944-45, as the US army fought its way through Germany's West Wall using 155mm howitzers to destroy hard points in urban areas. German commanders claimed, as Palestinian ones have recently, that the use of heavy artillery in civilian neighbourhoods is unethical. However, such assertions are ridiculed by the (victorious) US and Israeli armies.
    I would be far more comfortable discussing ethics, which have been very useful to me during my media career, in a less extreme context.

  6. Great to see someone still understands our armed forces are at war, and is thinking about the issues. Most of Australia is happily drinking lattes and chatting about last weekend. Re robot soldiers - if the human mind is able to take on the entire US security system with Stanley knives, and win, or almost bring down multiple airliners using dummy drink bottles (UK), then overcoming robots (regardless of the ethics) is not a problem. Let's face it, currently Western soldiers cannot distinguish the 'enemy' from civilians, so what hope do robots have?

  7. Steve - yes, you make a valid point. It IS generally the victors who determine what is and isn't 'ethical' in war. But I suppose the question for me is still: how do we ensure that a concern for human life, particularly civilians and the innocent, is programmed into an autonomous 'lethal' robot? Is this at all possible, or is the very 'nature' of an ALR to only KILL? As Peter said, 'soldiers cannot distinguish the enemy from civilians, so what hope do robots have', since they are programmed by humans? Is it really just a problem of 'recognition', or are our superpowers willing to accept that the death of civilians is just 'collateral damage' and a consequence of war? Am I just being idealistic when I ask: how do we level the playing field in war when one side may use robots and the other side vulnerable humans? Yes, minor technologies - box cutters, bombs made using plastic bottles and explosives, and ingenuity - have destroyed human life, planes and buildings (9/11), and this does show that the human is capable of combating more sophisticated technologies. So is this the way it is going to be: brains against brawn, the human against the machine? Which brings me to my next point - are we looking in the wrong direction? Is a concern with 'autonomous lethal robots' misguided? Should we be more concerned about those (terrorists) who use lesser (undeveloped) technologies to wage their war against innocent people?

  8. With respect Julie, I find the discussion re using 'autonomous lethal robots' a mite spurious. ALRs (if developed) would be used as just another lethal weapon in the arsenal of the 'developed' world. 'Ethics' is always considered well after the event (Hiroshima?). Neither do I think we should be "concerned about those (terrorists) who use lesser (undeveloped) technologies to wage their war against innocent people". I mean, let's face it, WE are killing innocent people right now.

    What we should be considering is this: we KNOW firepower did not win in Vietnam or bring down the Berlin Wall. What really wins 'hearts and minds' is communication and understanding and trade between ordinary people. For example, youths of the former USSR were desperate to listen to Western music and understand the West; in fact, a huge black market for illegal 'backyard' Beatles recordings on acetate existed behind the Wall from the 1970s. The arts DID have a role. My gut feeling is that young Iranians especially feel likewise. Instead, all we see are threats by leaders.
    The answer is probably the technology under your fingers.

  9. Peter, thanks, but you don't have to be polite; your viewpoint is totally valid. I like to be challenged on what I think and the way that I approach things.
    I think one of the other points that came out of 'Rodney's Robot Revolution' was the need to discuss ethics, and you're right, one way of doing this is by using communications technologies; I suppose that's part of the reason why I began this blog in the first place. So, in order to continue the discussion, what might be the next question posed? Any ideas?
