11 Apr 2018 14:36
As nations convene at the United Nations in Geneva this week to continue deliberations on “lethal autonomous weapons systems” or “killer robots”, it is clear that the diplomatic process is moving too slowly.
If we are to avoid a future where robots decide who gets to live and who dies, there is no time or money to waste — governments must act now.
Lethal autonomous weapons systems — which take human beings out of the decision-making loop — have the potential to become the “third revolution” in warfare, after gunpowder and nuclear weapons.
Artificial intelligence technology is being developed at an ever-increasing speed, and several high-tech military powers, including the United States, China, Israel, South Korea, Russia and the United Kingdom, are investing large sums in autonomous weapons systems with decreasing levels of human control.
Such weapons systems would allow machines to determine who or what to target, whether on the battlefield or in policing, border control and other circumstances, without any further human intervention. This would mark a fundamental change in the nature of warfare, and raises profound ethical, human rights, legal and technical concerns.
It is unlikely that fully autonomous weapons will be able to distinguish properly between combatants and civilians — one of the fundamental rules of war.
Nor will machines, however sophisticated, have moral agency or be able to make sound judgments in the complex circumstances that constitute a battlefield.
Professor Christof Heyns of the University of Pretoria pointed out many of these concerns in his UN Special Rapporteur report on killer robots in 2013.
Furthermore, such weapons could be used in "a domestic law enforcement situation" or "used by states to suppress domestic enemies and to terrorise the population at large".
Lethal autonomous robotics could be “intercepted and used by non-state actors, such as criminal cartels or private individuals”.
Roboticists and AI experts have long warned of the dangers of lethal autonomous weapons systems, and of a new global arms race. In several open letters they have urged the UN to negotiate an international treaty that prohibits their production and use. Nobel laureates and civil society groups around the globe are actively working to promote such a ban.
In 2014, Archbishop Emeritus Desmond Tutu and other African faith leaders joined the call for a ban by signing a joint statement issued by more than 160 religious leaders and organisations, saying “robotic warfare is an affront to human dignity and to the sacredness of life”.
Although the technology involved in autonomous weapons may be complicated, the question countries now need to take a firm stand on is really quite simple: Should machines be allowed to make life-and-death decisions?
Within the international community, there is increasing support for the notion that new international legislation is needed, and many nations have acknowledged the need to retain meaningful human control over the use of force involving autonomous weapons.
But the process is moving very slowly, and time is running out.
As the technology develops ever faster, both nations and private companies are investing heavily in weapons systems with decreasing levels of human control.
Ahead of this UN meeting, 22 nations had officially called for an international ban on lethal autonomous weapons systems. Among them were five African countries — Algeria, Egypt, Ghana, Uganda and Zimbabwe.
South Africa made a strong statement on behalf of the African Group during the Convention on Conventional Weapons meeting on lethal autonomous weapons systems at the UN in Geneva on April 9, calling for a ban. This is very promising.
Some countries at the convention are arguing for non-legally binding political declarations, but measures that fall short of a legally binding treaty will be insufficient to prevent a future of killer robots.
It is therefore of utmost importance that South Africa and other like-minded countries not only adopt national legislation banning fully autonomous weapons systems, but also push for an international ban and convince other countries to join them in this effort.
With enough political pressure, a legally binding instrument prohibiting the development, production and use of fully autonomous weapons systems could be concluded by the end of next year.
As the letter from the 160 faith leaders put it: “Whereas other weapons were only banned after their use showed to have grave humanitarian consequences, we now have a chance to stop these atrocious weapons before they enter the battlefield.”
But we must act, and we must act now.
Dr Thompson Chengeta is a fellow at the South African Research Chair in International Law at the University of Johannesburg