5 January 2001

Alas, mankind, we knew him …

There have always been people concerned about the threat from science. But now even some scientists are saying they’re scared. Andrew Smith ventures into Silicon Valley to meet the technoseers who think supercomputers have made man obsolete

A grey Monday morning, Jeremy Paxman on the radio. Woke up dreaming that I was lying on a beach in California: now I know that I’m not. For 45 minutes, the discussion is a genteel digest of various books Paxman’s guests have written. Then, abruptly, the mood darkens as the presenter introduces Professor Kevin Warwick of the Cybernetics Department at Reading University.

His tone is terse as he describes how Warwick had an operation to insert a small remote sensor into his arm, which was then linked to his computer, and how next year he plans to have another implant connected to his central nervous system, potentially enabling the machine to interact with his body, to send and receive instructions as a kind of surrogate brain.

Warwick’s fellow guests discuss some of the alarming issues raised by his work, but Paxman seems thrown. He can’t believe what he’s hearing. “Well,” he tells them at the end, “you’re all either credulous or I’m … I’m benighted somehow.” The toaster clangs off and I find myself smiling. A few months ago this stuff sounded like science fiction to me, too.

The Intel building looms inscrutable white in the perfect Silicon Valley sun, just off Highway 101 at Santa Clara, near San José. Gathered around it is a leafy campus, through which a few of the 6,000 staff stride in short sleeves and skirts. It is pleasant, but almost freakishly undramatic in presentation and appearance; like the suburban, Stepford-wife sprawl of the valley as a whole, in fact. It takes an effort to remember that decisions taken here will have a more profound effect on the shape of things to come than all but a few acts of government over the coming decades.

I’ve driven down from San Francisco because I want to know if and when computers will become capable of intelligent thought, and in all the world few people are better equipped to tell me than Justin Rattner, the Intel director who heads its Microprocessor Research Laboratories. In a blue suit and matching tie, with wire specs, he is a jocular textbook image of the grown-up geek, and he self-evidently loves his job. In a featureless grey-and-white seminar room, we begin by discussing Moore’s Law, the projection that has computer processing speeds doubling every 18 months, which he expects to hold good for the next 10 years at least. “Given the way of these things,” he adds with a chortle, “that’s almost like saying we don’t see any end to it.”

Could the machines exhibit behaviour that we would recognise as intelligent? Could we build machines that were smarter than us? “There is no question that an extraordinary amount of computing power is required to do that. No one really knows how much. All of our past attempts have fallen well short. But you have to assume that, yes, the day is foreseeable when that would be possible.” Once, thoughts such as these would have been little more than something to kick around down at the wine bar, but that changed in April, when the United States Internet culture magazine Wired ran a long and detailed article called “Why the future doesn’t need us”, by the co-founder and chief scientist of Sun Microsystems, Bill Joy.

The sub-heading read: “Our most powerful 21st-century technologies – robotics, genetic engineering and nanotech – are threatening to make humans an endangered species.” Anyone who has spent much time on the Internet, or hanging around geeks, is familiar with such shrill prophecies, but not from a man like Joy, the billionaire co-author of the Java computer language, who recently co-chaired the Presidential Commission on the Future of IT Research and is reputed to be a profoundly sane, socially conscious man.

Wired editor Katrina Heron had chosen him to write the article for these very reasons, and it made an immediate and lasting impact in the US. Taken on its own, Joy’s thesis sounded a little alarmist, but plausible. Until recently, he observed, he had expected Moore’s Law to hold only until 2010 or so. Yet, thanks to the unexpectedly rapid progress in molecular electronics, it has become clear that “we should be able to meet or exceed the Moore’s Law rate of progress for another 30 years.”

By 2030, then, we may be able to mass-produce machines that are a million times more powerful than the personal computers of today. No one knows whether this could ever give rise to consciousness – many of our more intractable existential questions will be answered at this point – but it could certainly allow machines to process information and make decisions with a rapidity and efficiency way beyond our own capabilities. In combination with rapid advances in genetic engineering and nanotechnology, Joy concluded, “enormous transformative power is being unleashed … these advances open up the possibility to completely redesign the world, for better or worse”.
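The arithmetic behind that “million times” figure is easy to reconstruct, on the assumption (mine, for illustration) that Joy is simply compounding the popular 18-month doubling period over his extended 30-year horizon:

\[ \frac{30~\text{years}}{1.5~\text{years per doubling}} = 20~\text{doublings}, \qquad 2^{20} = 1\,048\,576 \approx 10^{6}. \]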

The tiny nanotechnological machines are already being built, but are not yet amenable to remote control. That breakthrough is now expected within 30 years. Of necessity, these devices would be under the direction of computers, which almost anyone could own. They would also, by definition, have the potential for self-replication and thus some form of independence.

In Engines of Creation, Eric Drexler’s seminal work on the nanotech revolution, the author detailed the many ways in which it would improve our lives, but he also identified what has come to be known as the “grey goo problem”: millions of microscopic “assemblers” running amok, either by accident or design, with enormous and unstoppable destructive force. And, in contrast to industrial, 20th-century technologies, the final, enabling steps in these new ones will not necessarily be the most problematic and ponderous.

“The breakthrough to wild self-replication in robotics, genetic engineering or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal,” claimed Joy. He invoked the atomic bomb and the antibiotic-resistant superbugs that haunt modern medicine to remind us that our husbandry of new science has been imperfect before.

He might have added BSE and, according to some theories, HIV. The threat being described still felt rather abstract and unreal. Then I went off and did some research of my own, at which point it became clear that Joy was not the only techno boffin serving notice of big and impending change. There was Professor Rodney Brooks, director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT), offering the thought that, a few generations hence, “I don’t think we’re going to be the same species any more.”

There was Freeman Dyson, a distinguished contributor to the theory of quantum electrodynamics, declaring that “humanity looks to me like a magnificent beginning, but not the final word”; the equally distinguished inventor Ray Kurzweil giving us “a 50% chance of survival”, before adding, archly, “But then, I’ve always been accused of being an optimist”; and Hans Moravec, director of the Robotics Institute at Carnegie Mellon University in Pennsylvania, casually observing that “the robots will eventually succeed us: humans clearly face extinction”. And there is the question of who is going to have access to these powerful forces.

Christine Peterson, president of the pro-new-technology Foresight Institute just up the road from Intel at Palo Alto, had bluntly told me that “any military establishment looks at this stuff and says, ‘We need this now.’ There is a race going on, no question. There is also a commercial race.” So, wanting to know whether there were strong links between the private capitalists of Silicon Valley and the military establishment, I contacted Michael Geyer, a professor of contemporary history at the University of Chicago, who has a particular interest in this area and has written on “infowar” in the past. The simple answer, he said, is that we can’t be sure.

“Every large-scale 20th-century technology had a strong military component,” he began. “There has been a general view that this is not true of the electronic revolution, but it’s still an open question. What we can say is that there is a huge discussion going on about a complete remaking of the military body into a primarily electronic force by 2020. Billions of dollars have been spent on this, but it remains unclear how this relates to the multibillions of dollars flowing out of venture capital. We know the military is there, though the wired community in Silicon Valley, which is mostly suspicious of the State, has yet to acknowledge it.”

Precisely how much crossover there is remains to be seen. “In any case, the new technology changes the security agenda out of all recognition. Hardware is no longer the primary issue. For the first time, knowledge and ingenuity can be very destructive weapons. And you never know who’s got those.” The things Geyer said reminded me of Justin Rattner’s commendation of Intel’s latest “star wars” technology, and also of a conversation I’d had earlier with John Leslie, professor of philosophy at the University of Guelph in Canada.

A few years ago Leslie published a book rejoicing in the title The End of the World: The Science and Ethics of Human Extinction, which I haven’t been able to mention with a straight face up to now. In it he maps the many possible routes to our destruction as a species, eventually rating our chances of extinction at a sporting 30%. He sees global warming and biological warfare as more immediate causes for concern than superintelligent computers and/or grey goo.

In any case, the danger from these areas may not be quite what it seems. “Two possible scenarios present themselves here,” he says. “The first is that the machines take over against our wishes. That seems to me less likely than that they take over with our tacit or explicit blessing. My own view is that, if it were all true, and they were conscious, then fine – but if, as is likely, they weren’t conscious in the full sense, then that would be a disaster.”

The computers taking over with our blessing? It sounds preposterous. Until you speak to Professor Hans Moravec of Carnegie Mellon, who will be waving a white hanky and cheering them on. When I catch him, he’s been writing continuously for 24 hours. He sounds tired at first, but perks up at the prospect of having a pop at Joy. “I mean, the thing that was notable about Bill Joy,” he offers cheerily, “was that he actually accepted the possibility of these things. That was the big transition for him. And then he sort of … panicked. Ha ha ha!”

Can Moravec understand the rest of the world’s dismay at his own lack of panic? “I have no reason to panic. I’ve accepted the possibility of intelligent, mobile machines for 40 years. I see it as a major transition in the nature of human life. For me, robots are extensions of humanity. Something I’m always pointing out is that we’re 99.9% cultural beings now: the information that’s passed from generation to generation in our genome consists of a few billion bits, but there are trillions of bits in our libraries. And we would not be who we are if it wasn’t for this cultural information.

“The robots are simply the point when that cultural information takes over from strict biology. Eventually they might develop in their own directions, but that’s the normal situation for descendants. They have their own lives.” Quite apart from the presumed inevitability of the machines’ dominance, Moravec sounds as though he’s looking forward to replacement.

“Well, I think of it as the most mature and potentially full future that we could have. Everything else seems horrible, stagnant, limiting; probably fatal. If we were to stay just in our biological form, sooner or later some biological cataclysm would come along that we couldn’t deal with. By combining with machines, going into this form, we at least have a chance. Intelligence now is a rare and unusual thing in the world. Only human beings have it, and they’re actually quite … er … what’s the word …?” Dumb? “Ha ha ha. Yeah, since there’s no competition, we’ve never noticed how bad we are at what we do. And computers have already exploited that.

“They can already do the job of 10,000 clerks, which shows how badly clerks are fitted for the job they do. But we are suited for seeing and moving and so on. Machines can’t do that yet as well as we do.” Will he venture a timescale for these prophecies? “I have four stages of robots that resemble reptiles, mammals, primates and humans in terms of what they can do, and I expect them to be about a decade apart. And we’re not yet at the reptile stage. Things we can produce this decade will be like very small vertebrates in their complexity: like fish. In fact, we’re only five years away from commercialisation, with the first robots that will be able to clean floors or guard an area without requiring specialist installation. Machines that can learn in any meaningful sense are still a decade off, though.”

Does he have no qualms about working on things that, in his own estimation, could one day subjugate or exterminate his own species? “Well, yeah, but I’ve decided that that’s inevitable, and that it’s no different from your children deciding that they don’t need you. So I think that we should just gracefully … bow out. Ha ha ha. But I think we can have a pretty stable, self-policing system that supports us, though there would probably be some machines that were outside the system, which became wild. I think we can co-exist comfortably and live in some style for a while at least.”

And I find myself thinking that Professor Moravec should get out more. Yet, at the same time, two projects currently being developed in university labs imply that his pronouncements are more than just self-serving hyperbole. First, two researchers at Brandeis University in Massachusetts have created a computerised system that automatically creates, evolves, improves and finally builds a variety of simple mobile creatures without significant human intervention.

Then there is the aforementioned Professor Warwick at Reading, who hopes that his next implant will allow his computer to record motor signals as they pass through his central nervous system, then recreate the corresponding movement by replaying that recording. There is the additional and infinitely more intriguing possibility that this could work for subjective states involving emotions, memories and pain. Like Moravec, he expects computer intelligence to eclipse human intelligence some time in the next 20 to 30 years, but in contrast to the American, he professes to be worried by the prospect. He sees a possible solution – in fact, the only palatable solution – in augmenting our given, biological selves with computer technology. In joining them, as it were.

“If anything, Moore’s Law is speeding up, with processing speeds doubling every year or so now,” he explains. “We know that machines will have phenomenal memory and speed of processing, so I say, ‘Why can’t I have a bit of that?’ Machines aren’t limited to three dimensions the way we are. I’d love to be able to think in 20 dimensions.” As significant as any potential improvement on our flesh and blood inheritance, however, is the sensation Warwick got from being linked to his computer during the first experiment. “One of the reactions I had to having the implant was a feeling of affinity with my computer. Once that becomes a permanent state, you’re not really a human any more, you’re a cyborg. Your values and ethics would be bound to change, I think, and you would view unaugmented humans a little differently.”

Consider the attitude we have to cows, he adds. Once we were the same, but whereas we might like them now, we wouldn’t necessarily elect one as prime minister (an assertion that, in the interest of decorum, we will allow to pass without further comment). “It’s a dangerous game,” he concedes, “because we’re working with things that could change the world, and you can never be quite sure where that might lead. But that’s the story of humanity, isn’t it?”

Unsurprisingly, this set of circumstances supports a healthy fringe of cultish pseudo-philosophies, with names like Transhumanism and Extropianism, which welcome the prospect of biological and neurological augmentation. They also await the coming of the “singularity”, or “techno rapture”, which is expected to happen when the super-intelligent computers we create acquire the capacity to design their successors, at which stage, it is believed, the cycles of Moore’s Law will move from human to machine timescales. Given that computers don’t eat or sleep and will have the capacity to think progressively more quickly than us, the cycles of invention, while remaining constant in relative terms, would get shorter and shorter in our subjective human terms, until finally they were racing into infinity, leaving us trailing in their wake as a no doubt rather grumpy and confused rump of “posthumanity”.
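There is even a tidy bit of arithmetic behind that image of cycles “racing into infinity”, on the idealised assumption (mine, by way of illustration) that each generation of machine thinks twice as fast as the last, so that a design cycle of constant length in the machines’ own subjective time takes us half as long as the one before. The total elapsed human time then converges to a finite sum:

\[ T_{\text{total}} \;=\; \sum_{n=0}^{\infty} \frac{T_0}{2^{n}} \;=\; 2\,T_0, \]

which is to say that, on this toy model, an infinite number of machine generations would fit into a finite stretch of human history – and everything beyond that horizon is the “posthuman” era.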

Personally, I love this idea. It’d be like living the end of 2001: A Space Odyssey. But it’s obviously bunk. Right? Ian Pearson is one of British Telecom’s (BT) “futurologists”, whom the company hires out to other organisations on a consultancy basis. He tells me that BT Labs has been saying everything you’ve just read for years and that, while it is easy to dismiss someone like Kevin Warwick as an attention-seeking nutter, what he says is valid: namely, “that if you start building machines that have the potential to be smarter than people, you’re playing a very, very dangerous game”.

Going back to something that Paxman said on the radio, though: couldn’t we reassert our corporeal authority by simply switching the buggers off? “Perhaps,” he says. “But imagine machines which make use of solar energy and/or are linked to a grid, with battery backup. Ten years after they’ve reached human equivalence, we can expect the computers to be millions of times smarter than us, with an infinitely more subtle grasp of physics and responses which, in the context of our slow biological brains, would seem instantaneous. If they were hostile, it would be like us fighting chimps.

“A few years ago, we realised that we weren’t far away from having a global telephone network with as many connections as the human brain. We’ve passed that now. BT tried to explore whether there was any danger of consciousness occurring, but had to give up, because we don’t know what makes a thing conscious. Myself, I think we’ll muddle through. The biggest fear for me is a big fundamentalist, anti-IT backlash. At some point, governments are going to have to wake up to these possibilities and start planning. I hope they do.”

Do we believe any of this? Soft and inadequate-brained as I am, I found serious concern easier to sustain in Silicon Valley, where virtually everyone you meet works in computer-related industries, talks about little else and can easily have you wondering whether the cyborgs aren’t already among us. Even there, though, scepticism is easy to find. Confronted with his lead role in our future extinction, Intel’s Rattner took a sanguine line. “Well, you know,” he chuckled, “those of us who were around in the 1950s remember all the visions of what life was going to be like in the year 2000, with flying cars and tourist rockets to the moon. Technology just doesn’t advance in this nice, linear fashion. And society represents tremendous back pressure on these things.”

So he thought Bill Joy was fretting over nothing? “I think … that he must have seen The Matrix too many times.” And with that he was up and off. But was that a bolt I glimpsed in his neck?