Stop the Killer Robots: an Interview With Professor Noel Sharkey

Photo Credits: stopkillerrobots.org

Earlier this month, members of the Campaign To Stop Killer Robots briefed UN Secretary-General Ban Ki-moon’s Advisory Board on Disarmament Issues on the emergent threat of autonomous weapons systems, or killer robots. On 7 March, Ban Ki-moon addressed the advisory board personally and urged a swift resolution of the concerns expressed internationally about the development of such weaponry.

These concerns had been raised months earlier. On 27 July 2013, the same advisory board released a report warning that “the increasing trend towards automation of warfare and the development of fully autonomous weapons systems (also referred to as lethal autonomous robotics, LARs, or killer robots) gave rise to a wide range of legal, ethical or societal concerns that had to be addressed”. The report also stressed “the ethical limits in deciding on the life or death of a human”.

The Campaign To Stop Killer Robots – among other groups – has lobbied governments assiduously to stop the development of these unprecedented weapons, which pose a serious and potentially fatal threat to human life. In November 2012, Human Rights Watch published Losing Humanity: The Case Against Killer Robots, which called for a ban on autonomous weapons systems. On 15 November 2013, 117 member states of the Convention on Conventional Weapons (CCW) agreed a mandate to begin international discussion of these systems, starting with a four-day expert workshop at the UN in Geneva in May which, campaigners hope, will lead to a Protocol VI banning their development, testing and use. On 27 February, the European Parliament passed a resolution by a vote of 534–49 that limited the use of armed drones and, importantly, called for a ban on killer robots.

The Global Oyster recently interviewed Professor Noel Sharkey, a prominent voice in the debate over “killer robots”. Professor Sharkey is Professor of Artificial Intelligence and Robotics and Professor of Public Engagement at the University of Sheffield, co-founder and chair of the International Committee for Robot Arms Control (ICRAC), and a spokesperson for the Campaign To Stop Killer Robots.

Source: http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/

Q. What do you mean precisely by “killer robots”?

A. Killer robots are autonomous weapons systems: once they are activated, they can select their own targets and kill them without human supervision. In other words, the kill decision is delegated to a computer program.

Q. What are the main ethical and legal ramifications of the development of fully autonomous weapon systems or “killer robots”?

A. The technology is not fit for purpose: it cannot comply with international humanitarian and human rights law, such as the Geneva Conventions, or with two important ethical and legal principles known as “the Principle of Distinction” and “the Principle of Proportionality”.

The Principle of Distinction means that we must, under all circumstances, be able to distinguish between military and civilian targets – for example, between a combatant and a civilian, or to recognise a combatant who is wounded or mentally ill and may no longer be targeted. There are no robots or computer systems capable of doing this, particularly in an insurgent conflict. Computer sensing and vision systems are not up to the task. Even if they were, human judgement would still be required to determine when it is appropriate to kill.

The Principle of Proportionality means that any collateral damage to property or to the civilian population – whether killing or maiming – must be proportionate to the direct military advantage. There is no clear, objective definition of military advantage; it is up to a very experienced commander with situational awareness to decide whether an attack is proportionate. No computer system is up to this task, nor will one be in the foreseeable future. If commanders get the decision wrong, they can be held accountable. A robot cannot be held accountable.

Q.  Is there not also potential for the weaponry to malfunction?

A. Of course. These weapons muddle the whole chain of accountability. Mishaps could be caused by faulty or damaged sensors, a computer crash, spoofing or gaming by the enemy, cyber-attack, or infiltration of the industrial supply chain.

Q. How would the introduction of “killer robots” change conventional warfare?

A. It would disrupt international security. Without the threat of death or harm to a country’s own forces, starting conflicts becomes much easier; in particular, public resistance to war would be lowered. Autonomous weapons could also trigger unintentional conflicts: if opposing countries both deployed autonomous robots, no one could predict how the systems would interact when they met and fought, and this could cause great harm to the civilian population. Once these weapons are deployed there will be mass proliferation, as we have seen with drones, which more than 80 countries now possess.

Q. With the introduction of drones and UAVs into conventional military arenas, has warfare, as we know it, already changed significantly?

A. There are currently only autonomous defensive weapons systems. These include the US Phalanx system for shooting down incoming attacks on US warships, the German Mantis system for shooting down incoming mortar fire, and the Israeli Iron Dome for shooting down incoming rockets. There are also flying loitering munitions, such as the Israeli Harpy, which autonomously attack anti-aircraft installations by identifying their radar signals.

But there are strong development programmes toward autonomous weapons systems in the UK, US, Russia, China, Israel and South Korea that may reach fruition over the next decade. If we do not pre-emptively ban autonomous weapons systems before billions of dollars are invested and mass proliferation follows, they will revolutionise war by turning it into a clinical factory of automated killing.

Q. In your personal opinion, what is the main reason to stop the development of killer robots?

A. It is morally obnoxious to delegate the decision to kill a human to a computer program; doing so would cross a fundamental moral line. It is the ultimate human indignity to have a machine kill a human without a person making that decision.

With thanks to Professor Noel Sharkey.

For more information on the Campaign To Stop Killer Robots, visit: www.stopkillerrobots.org.

Interview by Jamie Pinnock.

