30 July 2015

Bid to ban autonomous killing machines

Stephen Hawking said that he wants to spend his prize money to help his daughter with her autistic son

It is a characteristic of technological development that humans get machines to do the things they don’t want to do themselves, whether it is washing the dishes, mowing the lawn or walking long distances to get somewhere.

But can we outsource our killing to machines?

The more than 10 000 signatories of “Autonomous Weapons: an Open Letter from AI [Artificial Intelligence] & Robotics Researchers” say no. They include SpaceX founder Elon Musk, physicist Stephen Hawking and Apple co-founder Steve Wozniak.

The letter reads: “Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain predefined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

This question – whether it is ethical to deploy killing robots – was heatedly debated at last year’s EuroScience Open Forum.

Autonomous weapons are weapons for which, “as soon as someone pulls the lever, the computer is in charge”, said Noel Sharkey, a professor emeritus of artificial intelligence and robotics at the University of Sheffield in Britain.

“It will seek out a target, track it, select it and kill it [without human intervention].” It was against a person’s right to life and dignity to have a machine kill you, he said: “We are delegating killing to a machine, rather than another human.”

Ronald Arkin, from the Georgia Institute of Technology in the United States, said: “Mankind is at its worst in the battlefield. Can robots be more humane than human beings?”

Unfortunately, there is not much data with which to answer that question.

The signatories of the open letter argue that these are the wrong questions: “The key question … today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap … to mass-produce. It will only be a matter of time before they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etcetera.”

While stopping a technological arms race may seem impossible, Professor Toby Walsh, leader of the optimisation research group at National ICT Australia, wrote in an article on The Conversation website that it can be done, and has been done before.

“A recent example is the United Nations Protocol on Blinding Laser Weapons, which came into force in 1998 … Of course, the technology for blinding lasers still exists; medical lasers that correct eyesight are an example of the very same technology. But because of this ban, no arms manufacturer sells blinding lasers. And we don’t have any victims of blinding lasers to care for.”

This is what the open letter signatories are hoping for: “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Disclaimer: Sarah Wild is a signatory of the open letter.