
Ethical Machines in War: An Interview With Ronald Arkin

The introduction of robots in war changes the premises of warfare and radically redefines the soldiers’ experience of “going to war”.

by Sofia Karlsson on April 25, 2011


The latest development in military technology is unmanned weapon systems: remote-controlled robots that are replacing human soldiers in war. The introduction of robots in war changes the premises of warfare and radically redefines the soldiers’ experience of “going to war”. When the US military went into Iraq in 2003, it employed only a few robotic planes, so-called drones. Today, thousands of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) are in use, mainly in Iraq and Afghanistan, but also in Pakistan and, lately, in the military intervention in Libya.

Most robots are unarmed and used for surveillance, reconnaissance, or the destruction of mines and other explosive devices. In recent years, however, there has been a dramatic increase in the use of armed robots in combat. The new technology permits soldiers to kill without being in any danger themselves, further increasing their distance from the battlefield and the enemy.

Developing unmanned systems is a top priority for the US military. Robots save soldiers’ lives, and because they require no life support or safety systems for a crew, they are also cost-efficient. According to the US Department of Defense’s Unmanned Systems Roadmap 2009-2034, armed robots are moving into every domain of the armed forces. UGVs are projected to conduct dismounted offensive operations, armed reconnaissance and assault operations. Future UAVs will have the capacity to carry out air-to-air combat, and the Navy is developing Unmanned Underwater Vehicles (UUVs), particularly suited to laying and neutralizing mines. At present, all robots operate under human supervision, but the Roadmap states that “unmanned systems will progress further with respect to full autonomy”.

Renowned roboticist Ronald Arkin argues that robots in war could both save soldiers’ lives and ultimately reduce civilian casualties. The emotional stress that war puts on soldiers often results in unethical and sometimes violent behavior. A 2006 survey of soldiers in Iraq by the US Army Surgeon General’s Office showed that soldiers with high levels of anger were twice as likely to mistreat non-combatants as those with low levels of anger. Fewer than half agreed that non-combatants should be treated with dignity and respect, and approximately 10% admitted that they had damaged or destroyed property, or hit a civilian, when it was not necessary. Arkin proposes that robots can be engineered with ethical intelligence, which would ultimately make them perform better than humans in war situations.

Arkin is a professor at the College of Computing at the Georgia Institute of Technology. He serves as a consultant for several major companies in the area of intelligent robotic systems and speaks regularly to the US military, the Pentagon, NGOs and government agencies. He spoke to OWNI about the new robotic technology and the prospects for creating robots governed by ethical principles.

OWNI: As the capabilities of unmanned weapon systems advance, we may soon have fully autonomous systems in battle. How long do you think it will take until we have machines that act and kill on their own?

Arkin: I want to argue that they already exist. Simple systems such as landmines can be defined as robotic systems because they sense the world and they actuate, in that case through their explosions. Systems such as the Patriot missile, the Aegis-class cruisers, and the CAPTOR sea mine all have the ability to target and engage without additional human intervention after being turned on. So to some degree they are already here. It is just a question of how much more will be introduced into the battle space in the near term.

Autonomy is a little bit insidious. It is kind of creeping up on us. It is not going to be something that is absent today and suddenly here tomorrow. If you look at the diagrams included in the military services’ acquisition roadmaps, there is a smooth, continuous curve in multiple stages showing the progression through levels of autonomy. The notion is that decision making is going to be pushed further and further toward what we call the tip of the spear – toward the machine itself and less and less on the human side – for the immediate decision of taking a life.

OWNI: You propose that robots can be engineered with ethical software, adhering to the Laws of War, but lacking emotions such as anger and fear. How human can machines be made?

Arkin: It is not a question of making them human per se. It is a question of making them adhere to what human beings have defined as the ethical way to conduct war, as oxymoronic as that sounds. If you look at the international laws of war and the Geneva Conventions, you find what human beings have agreed upon as the ethical way to kill each other in warfare. I am not happy about that, but if we are going to introduce robotic systems into these kinds of domains, we must ensure that they adhere to the same rules of law that we expect of our human war fighters.

The interesting thing is that, for the battlefield, people have been thinking about this problem for thousands of years. There has been broad international discussion and discourse, and there is agreement on what is considered acceptable and unacceptable. It is fairly well delineated, and for a roboticist trying to create an ethical robot it is low-hanging fruit compared to figuring out how to create a robot that will treat your grandmother appropriately and morally. These things have been considered by philosophers, military scientists and lawyers, and they are codified and in existence. That is why it is valuable.

OWNI: Your article “The Case for Ethical Autonomy in Unmanned Systems”, in the Journal of Military Ethics (December 2010), brings up the difficulty of producing human soldiers who fight well, even with training. Combatants may not fight well because they lack a built-in aggressiveness, and they may be incapable of following certain orders. Robots, on the other hand, would not hesitate to kill for emotional reasons, and they would always follow orders. Some people fear this would lead to inhuman wars. How would ethical robots deal with authority?

Arkin: They would not always follow orders. It must be possible for the robot to refuse an order if it is deemed unethical. That is the point. The notion of an ethical governor is to provide an evaluation mechanism that looks at the action the robot is about to undertake, weighs it in light of internationally agreed-upon accords, and decides whether the action should be carried out.
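For readers curious what such a gate might look like in software, here is a minimal, hypothetical sketch in Python. The constraint names and the ProposedAction fields are illustrative assumptions for this article, not Arkin’s actual implementation; his published architecture is considerably more elaborate.

```python
# Hypothetical sketch of an "ethical governor": a gate that evaluates a
# proposed lethal action against coded constraints before permitting it.
# All field names and constraints below are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    target_is_combatant: bool      # discrimination: combatant vs. non-combatant
    expected_civilian_harm: float  # estimated collateral harm (arbitrary units)
    military_advantage: float      # estimated military value of the action
    engagement_authorized: bool    # standing rules of engagement satisfied

# Each constraint returns True if the action is permissible under that rule.
CONSTRAINTS: List[Callable[[ProposedAction], bool]] = [
    lambda a: a.target_is_combatant,                             # discrimination
    lambda a: a.expected_civilian_harm <= a.military_advantage,  # proportionality
    lambda a: a.engagement_authorized,                           # rules of engagement
]

def ethical_governor(action: ProposedAction) -> bool:
    """Permit the action only if every coded constraint is satisfied;
    a False result is the robot 'refusing an order'."""
    return all(constraint(action) for constraint in CONSTRAINTS)

strike = ProposedAction(target_is_combatant=True,
                        expected_civilian_harm=0.2,
                        military_advantage=1.0,
                        engagement_authorized=False)
print(ethical_governor(strike))  # False: engagement not authorized, so refuse
```

The point of the design is that refusal is the default: the action proceeds only when every rule passes, mirroring Arkin’s description of a mechanism that can override an unethical order.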

OWNI: What is your view on the possibility of creating this type of intelligence today? Is it going to be possible?

Arkin: That argument comes from the computer scientist Noel Sharkey, among others, who basically cannot see how it can be done. The same arguments were once used against human flight or going to the moon. The IBM computer Watson just defeated human beings at the game Jeopardy!, in terms of the intelligence needed for that particular task. Robots are already smarter in some respects, and they will have better sensors for perceiving the battlefield. They will be able to process information much more rapidly than any human being possibly could. I am convinced that in the future we can indeed create systems that perform well, and outperform human beings, from an ethical perspective. An important distinction to keep in mind is that these systems will not be perfect, but they can do better than human beings. That would result in the saving of human lives, which is the goal of the work I have been involved in.

OWNI: Do you see any risks with your research? Would it be possible to reprogram ethical robots to be unethical?

Arkin: Just as you can tell soldiers to act unethically, you could conceivably reprogram robots to override or ignore the rules of war. The important thing is that in designing these systems we should make sure that if anything like that does occur, it is immediately attributable to whoever made those changes, thus facilitating the prosecution for war crimes of the individuals who carried them out. No one should ever be able to hide behind a robot and say the robot did it. There is always a human being at some point in this chain, and we have to make sure that someone is always accountable. We can try to engineer safeguards into the systems. That does not mean they could not be circumvented, but the point is that it is a whole lot easier to inspect the inside of a computer system and its logs than it is to inspect the human mind.
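As an illustration of the kind of inspectable, attributable logging Arkin describes, here is a hedged Python sketch of a hash-chained audit log, in which every entry records who made a change and is cryptographically linked to the previous entry, so later tampering is detectable. This is a generic technique (a hash chain), not a description of any fielded military system; the names are invented for the example.

```python
# Hypothetical sketch of a tamper-evident audit log using a hash chain.
# Each record is bound to the operator who made the change, keeping
# actions attributable; editing any past entry breaks the chain.

import hashlib
import json
from typing import Dict, List

class AuditLog:
    def __init__(self) -> None:
        self.entries: List[Dict] = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, operator_id: str, event: str) -> None:
        entry = {"operator": operator_id, "event": event,
                 "prev_hash": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("operator-7", "rules-of-war constraint table modified")
print(log.verify())  # True: chain intact, change attributable to operator-7
log.entries[0]["operator"] = "someone-else"  # attempt to hide who did it
print(log.verify())  # False: tampering detected
```

Such a log cannot prevent misuse, but it supports exactly the accountability Arkin insists on: a human being remains identifiable at every point in the chain.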

OWNI: Are there any other areas where these ethical robot systems could be deployed?

Arkin: Yes! We should have ethical robots wherever robots are being used. That is what it comes down to. How do you design these kinds of systems so that every human interacting with these platforms does so in ways that are for the betterment of society, and not to its detriment? When you start crossing cultures, there are different expectations and different points of view on what is ethically appropriate. The nice thing about the military work is that there are internationally agreed-upon accords. But there is no internationally agreed-upon accord on how to treat your grandfather.

OWNI: Why is your research important?

Arkin: I have often said that the discussion my work engenders is as important, if not more important, than the research itself. We try to raise consciousness about what is going on with these particular systems, and it is crucially important to make sure that we as a society fully understand what we are doing through the creation of this technology. Even if this leads to a ban on the use of lethal force by autonomous systems, I’m not averse to that. I do believe we can do better, but we just can’t let things move forward without fully understanding the consequences of giving these particular systems lethal force.


Photo Credits: Flickr CC familymwr, Will Cyr, Walt Jabsco and RDECOM
