14 November 2014

Dawn of the planet of machines

The day is coming, we are told, when the world as we know it ends. Somewhere in a laboratory – possibly in Silicon Valley in the United States or, more likely, at a rogue research group in China or North Korea – an engineer or a programmer will create a machine clever enough to make machines that are smarter than itself.

The technological singularity will be upon us, and there is no turning back. With each successive generation, these hyper-intelligent machines will become smarter, outstripping their increasingly redundant creators.

But how likely is this?

Recent months have seen a barrage of comments from the likes of Tesla and SpaceX founder Elon Musk and physicist Stephen Hawking, warning of the dangers of artificial intelligence (AI).

“This is quite possibly the most important and most daunting challenge humanity has ever faced,” writes Nick Bostrom, Oxford University philosopher and author of Superintelligence. “If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.”

The warnings of people working in the field are more commonplace but equally concerning, albeit without the makings of an action-packed blockbuster depicting mankind’s imminent demise. The more realistic question is what will happen when the “haves” – whether they are people or countries – develop the AI technologies that will make the “have-nots” entirely disposable.

AI is a broad and sprawling field, ranging from statistical modelling and heavy-duty data crunching to IBM’s Watson, a computer system that can answer questions posed in natural language.

“Thinking like humans is just one part of AI,” says Katherine Malan, a computer scientist at the University of Pretoria. “Most of us aren’t trying to do that. We’re trying to get computers to solve difficult problems that humans can’t solve, or that are simple for humans but we require machines to do it.”

Trying to make machines think and learn like humans is called cognitive computing.

In 1955, American computer scientist John McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines”, but questions about this definition centre on the word “intelligence”.

Humans are very good at making decisions even when the data is ambiguous; at this, we are much better than machines. A human baby, for example, can recognise a face or an object, remember it and know that it is distinct from the objects around it. Meanwhile, millions of dollars are being spent to make computers do something that a child learns before it can walk.

On the other hand, a machine can perform multiple complex mathematical computations simultaneously, in under a second – something far beyond the reach of any human. So what “intelligent” means for a machine and what it means for a human are two separate things.

But there is also a difference between a machine that can hold more information than a human, and a machine that can learn. Deep Blue – IBM’s chess-playing supercomputer, capable of evaluating more than 200-million chess positions a second – played then world chess champion Garry Kasparov twice, winning the second match in 1997 (Kasparov claimed that IBM had cheated and demanded a rematch). Deep Blue, like many other artificial intelligence systems, used a database of human knowledge together with search algorithms to optimise its moves.
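
Conceptually, that “optimise its moves” step is game-tree search. The minimal Python sketch below illustrates the minimax idea – the general technique behind such engines, not Deep Blue’s actual code – and the game interface it assumes (legal_moves, apply, evaluate) is hypothetical:

    # Illustrative minimax search: look ahead a fixed number of moves and
    # score the resulting positions. Not Deep Blue's implementation; the
    # `game` object (legal_moves, apply, evaluate) is a hypothetical stand-in.

    def minimax(game, state, depth, maximising):
        """Best achievable score for the side to move, searching `depth` plies."""
        if depth == 0 or not game.legal_moves(state):
            # Leaf node: score the position with a handcrafted evaluation --
            # this is where encoded human chess knowledge comes in.
            return game.evaluate(state)
        results = (minimax(game, game.apply(state, m), depth - 1, not maximising)
                   for m in game.legal_moves(state))
        return max(results) if maximising else min(results)

    def best_move(game, state, depth=4):
        """Pick the move whose subtree scores best for the current player."""
        return max(game.legal_moves(state),
                   key=lambda m: minimax(game, game.apply(state, m),
                                         depth - 1, False))

Real engines add refinements such as alpha-beta pruning and vast opening and endgame databases, but the principle – search ahead and score positions using encoded human knowledge – is the same.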

Blondie24, on the other hand, is a checkers-playing program developed by American David Fogel. The program taught itself how to win at checkers – by playing against itself and seeing what worked and what didn’t, rather than relying on human expertise.
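
In outline, that learning loop can be sketched as follows – an illustration of self-play evolution in the spirit of Blondie24 (which evolved neural-network board evaluators), not Fogel’s actual code; the helpers it assumes (random_player, mutate, play_match) are hypothetical:

    # Illustrative self-play evolutionary loop: candidate players compete
    # against each other, and slightly mutated copies of the winners seed
    # the next generation. The helpers random_player(), mutate() and
    # play_match() are hypothetical stand-ins, not Blondie24's code.
    import random

    def evolve(generations=100, population_size=30):
        population = [random_player() for _ in range(population_size)]
        for _ in range(generations):
            # Each candidate earns points by playing games against
            # randomly chosen peers: +1 for a win, 0 draw, -1 loss.
            scores = {id(p): 0 for p in population}
            for player in population:
                for opponent in random.sample(population, 5):
                    scores[id(player)] += play_match(player, opponent)
            # Keep the best half, refill with mutated copies of them.
            population.sort(key=lambda p: scores[id(p)], reverse=True)
            survivors = population[: population_size // 2]
            population = survivors + [mutate(s) for s in survivors]
        return population[0]  # best survivor of the final generation

The crucial design choice is that no human checkers expertise enters the loop: candidates are judged purely on the games they win against their peers.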

Is there a future in which this checkers-playing machine morphs into the evil overlord of science fiction?

“Machines are already able to process vast amounts of information very quickly,” says Tommie Meyer, director of the Centre for Artificial Intelligence Research, jointly funded by the University of KwaZulu-Natal and the Council for Scientific and Industrial Research’s Meraka Institute. “But [this processing is] at a level that is relatively superficial. Adding the ability to process all this information in a truly intelligent manner will result in machines that will be able to assist humans in making complex decisions.”

Besides, overlord status requires sentient intent. University of Cape Town philosopher Jacques Rousseau says that malice against other creatures requires “not only intelligence, but also sentience and, more specifically, the ability to perceive pains and pleasures. Even the most intelligent AI might not be a person in the sense of being sentient and having those feelings, which … [makes] it vanishingly unlikely that it would perceive us as a threat, seeing as it would not perceive itself to be something under threat from us”.

The recurring message from the experts the Mail & Guardian spoke to is that computers are tools, increasingly “intelligent” ones. Watson, for example, has been deployed in the healthcare sector, with more information in its cloud-based “brain” than any human could hope for.

Dr Solomon Assefa, IBM Research Africa’s director for research strategy, says that the company’s drive is not actually about AI, but “IA, intelligence augmentation”. “How do we enhance our cognition?” he asks. “It’s not about machines copying what we do and having brains more powerful and better than ours.”

The idea is that machines and robots help humans become better at what they do, Assefa says. History is littered with examples of machines helping humans improve their capabilities, from communications to flight. “That’s what we want to come up with when making the next [AI] technology, [a computer that can] solve logistical and optimisation questions. Computers provide us with objectivity, so we can make good decisions.”

Perhaps the greatest concern for humans – under the guise of an apocalyptic hostile machine takeover – is ego, and the refusal to believe that a computer can perform activities that were previously the domain of humans better than we can. In a world run on far-reaching AI systems, “such a world could be far superior to the one we currently live in”, says Rousseau. “We make many mistakes – in healthcare, certainly when driving – and it’s simply ego that typically stands in the way of handing these tasks over to more reliable agents.”

For Johan van Niekerk, an associate professor at Nelson Mandela Metropolitan University’s school of information and communication technology: “AI provides us with some powerful tools, [but] I don’t believe that humanity is ready for them.” The true danger, according to Van Niekerk, “lies in how much the gap between the employed and the unemployed will widen”. He cites the example of Google’s driverless cars: by December last year, four US states had passed legislation to allow driverless cars on their roads.

“Self-driving cars [are] a wonderful technology that can really improve road safety, but what will we do with all those people who make a living driving cars and trucks?”

The world has seen this before: as technologies replace humans, people lose their jobs. Unrest ferments as metal succeeds flesh and bone.

“Until we rethink our socio-economic models, the increasing use of AI is simply contributing to the unemployment problems societies face today,” Van Niekerk says. “We cannot continue to reduce the need for humans without addressing the problem of how those humans are supposed to make a living.

“A fundamental assumption [in the robots-annihilate-humans scenario] is that humans are not going to evolve, develop new skills that develop new opportunities. There will probably be a completely new job market that is suitable for humans. What’s the new set of economies going to look like?”

“We have been good at evolving over time,” he says, citing the industrial and agricultural revolutions. “We cannot ignore our ability to adapt and change.”