Lethal Autonomous Weapons: Issues for the International Community


On May 13-16 a United Nations (UN) expert meeting will discuss ‘questions relating to emerging technologies’ in lethal autonomous weapon systems. Such systems are distinguished by being mobile and selecting targets autonomously, without direct human supervision. This type of expert meeting represents the lowest rung of the UN ladder. The Chair of the meeting will simply write up a report to be presented later in 2014 to the annual meeting of States parties to the Convention on Certain Conventional Weapons. But the expert meeting in May could be the start of a process which might see the development of new national and international law to regulate or prohibit the use of artificial intelligence without human supervision in weapon systems.

Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots was launched in April 2013 with the objective of achieving a ban on the development, production and deployment of fully autonomous weapons. During that month, the UN’s Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions wrote a report in which he called for a moratorium on the development and deployment of lethal autonomous robots while an international commission considers the issue. The issue has thus proceeded at a very fast pace for international diplomacy, and for arms control in particular. Whatever happens in the expert meeting and subsequent discussions at the UN, the use of machines to make life and death decisions is an issue which will not go away.

Prohibition or International Law

The most important task for the expert meeting, and the subsequent debate in the UN and elsewhere, will be to clarify what exactly should be the subjects of new laws and regulations on lethal autonomous weapons, what type of rules should apply (if any), and how they should be implemented. The campaign has called for a simple prohibition, but another approach might be to establish international laws which regulate the circumstances under which lethal autonomous weapons can be used. This is the case with the Geneva Conventions and subsequent laws which regulate how other conventional weapons can be used on the battlefield.
This attempt at clarification is complicated by three factors.
First, we are dealing with an emerging technology. What is cutting edge now may well become obsolete in the many years it can take to negotiate an international treaty. A simple prohibition would be least affected by the rapid pace of technological innovation, but attempts to devise regulations could be stymied by new inventions.
Second, the issues are as philosophical as they are technological. The campaign objects to ‘killer robots’ on the grounds that machines cannot make the ethical and contextual decisions which humans are capable of. A robot would be unable to distinguish between a soldier advancing in order to attack and one walking forward while surrendering. Attempts to regulate or prohibit autonomous weapons therefore raise profound questions regarding when decisions are being made and the extent to which they are made autonomously, without human supervision. Current development of the technology suggests that autonomy is being introduced in relatively narrow contexts, such as a missile which, during a flight time of a few seconds, selects a target within a limited area of the battlefield. An autonomous weapon won’t look like the robots found in science fiction. We won’t be able to talk to it and ask it ethical questions. Instead, autonomy will be embedded in weapons, and decisions will be made in contexts in which humans could not operate: computer algorithms could be expected to identify and select a target within a fraction of a second, and routinely take a course of action which would result in their own destruction. There is a need to chart out new theories of the ethics of action in warfare, focused on the decisions that may be taken by machines rather than the more familiar dilemmas faced by human soldiers.
Finally, these discussions have profound financial implications. Defence ministries want to use robots because they save money (machines don’t require pay or pensions) and reduce the human price of military interventions. Companies stand to win contracts worth billions if they successfully bid to produce new robotic technology, or to lose out if their traditional equipment falls out of favour. Thus far the discussions have occupied the somewhat rarefied intellectual terrain of academic journals and NGO campaigning. But if the momentum for a ban picks up, then some very well-resourced military and industrial interests may well enter the fray. This topic and other key security issues will be discussed at SDA’s annual conference on June 4th ahead of the NATO Summit.
This post was first published as a guest contribution in Security & Defense Agenda on 9 May 2014. A Norwegian version of the contribution was published on 12 May at Ytring.no.