6 July 2023

The artificial intelligence endgame

AI cannot be responsible itself, but there are ways to ensure decisions made by AI are not discriminatory. (Science Photo Library/ Sergi Laremenko)

About five years ago I watched a documentary about a Japanese robotics engineer who created humanoid robots. He tested one as a receptionist in a doctor’s waiting room.

Unwitting patients were told that the receptionist was a substitute for the regular person. After being assisted, they took their seats in the waiting room. But here’s the thing: none of the patients realised that they were not talking to a human.

That incident is an analogy for the way people are oblivious to the elephant in the room: the effect artificial intelligence (AI) and artificial general intelligence (AGI) could exert on almost every aspect of our lives, and the existential risk they could pose to humanity.

We know now of the open letter written by Elon Musk, MIT professor Max Tegmark and others and signed by more than 1 000 people, including Apple co-founder Steve Wozniak and Israeli philosopher Yuval Noah Harari, requesting a six-month pause on AI development beyond GPT-4 to allow safety measures and policy to get up to speed. Why has it been left to get this far?

Talk of the fourth industrial revolution and universal basic income (UBI), also known as the Great Reset and first mentioned by King Charles III, had been floating around the mainstream media and business circles since late 2018.

Big Business, particularly in the finance and health sectors, was incorporating some of these AI systems into its operations. Then the Covid-19 pandemic struck, and we were shuttled into that revolution willy-nilly, showing up for Zoom meetings in our PJs, with our children in some cases crashing the show (we’ve all seen that video).

But the AI algorithm had already been operating in the “environment of [y]our brain” for several years, says British computer science professor Stuart Russell in his Stanford talk, “How Not to Destroy the World with AI”. It was ubiquitous, powering every Google search, suggesting YouTube videos and connections on Facebook and Instagram, and curating our music playlists and even our taste in literature.

We were at once hyper-connected and polarised, mostly getting more of the same, and better-targeted versions of it, whatever our interests. Confirmation bias on steroids. And, in wide-eyed wonder, we were there for it.

Who could blame us? It was novel and magical, transforming the workplace, making everything easier and, best of all, we all felt so clever. But we did not know that our data, desires, family photographs, political opinions and affiliations, consumer patterns and even our mindless chatter were being collected, analysed, shared and sold.

Nor, crucially, did we notice how humans silently went from “being an organism to being an algorithm”, says Harari. “For the first time in human history, you are reading the book, but the book is also reading you.”

Of course, none of this diminishes the upsides to technology, many of which have saved and enriched our lives. It helped us cope with the Covid-19 pandemic. Still, in embracing this “brave new world”, did we ever stop to consider what we had signed up for, and what it was we stood to lose? 

Americans Tristan Harris and Aza Raskin are co-founders of the Center for Humane Technology and were the brains behind the award-winning Netflix documentary The Social Dilemma. In his Nobel Prize Summit 2023 talk, Harris cites the many effects of what he calls our “first contact” with AI — social media.

“Information overload, doom scrolling, the loneliness crisis, narcissism (which angle makes me look best), addiction, sexualisation of young children, influencer culture, shortened attention span, polarisation” and, ultimately, the disintegration of whatever shared reality we had (though this is a loaded concept) are the order of the day.

Yet, although we now have glimpses of the runaway train that AGI could represent through large language models (LLMs) such as OpenAI’s ChatGPT, we have once again, starry-eyed, hopped on board. Two months after ChatGPT was launched, the platform already had 100 million unique users.

Raskin and Harris call our inability to grasp how AI works and the exponential effects of its capabilities the “rubber band effect”. Our brains simply can’t conceptualise it, so we revert to factory default. In short, says Harris, our brains are not matched with “the ways in which technology is influencing our minds”.

It is this mismatch that would ultimately nuke many of the “alignment” solutions the tech industry and our civil and government institutions might propose, he believes. And it’s a gap that will only get exponentially wider when AGI hits. What’s more, he says, policymakers tend to focus on the acute effects of the technology, such as a collision involving an autonomous vehicle, rather than on its chronic, cumulative harms.

With the advent of LLMs, these cumulative effects, along with the fallout from the new technologies, will only be amplified. It doesn’t help that developers outnumber safety and alignment researchers by something like 30:1 in the industry. 

We have to ask: is the universe merely a machine, as scientific theory from the Enlightenment to the present day would have us believe? And are we just the small moving parts, algorithmic cogs that Google DeepMind’s Demis Hassabis, in almost religious-speak, pronounced as perhaps “the mechanisms by which the universe can understand itself”?

Mattias Desmet, a clinical psychology professor at Ghent University, dismisses the rational-universe narrative that seems to drive the idea that organisms are algorithms. He says studies comparing artificial systems with natural systems have shown that the difference between the two is a certain irrationality that “transcends rational understanding on the part of the natural phenomena”, and that this irrationality turns out to be “the essence of that phenomenon”.

He also pointed to a broader problem in the scientific record — the “replication issue”. It was found in 2005 that up to 85% of studies in the medical field alone (the problem is not confined to medicine) could not be reproduced. In other words, they were scientifically meaningless.

Bill Gates, a chief investor in, and proponent of, the Pfizer-BioNTech mRNA vaccine, earlier this year admitted to the vaccine’s inefficacy on several of the fronts he’d “sold” it on during the pandemic. This was after he dumped his BioNTech shares to the tune of hundreds of millions of dollars in profit. According to investigative journalist Jordan Schachtel, the Bill & Melinda Gates Foundation made the investment in September 2019, just months before the pandemic.

The replication issue makes one wonder how data can be misinterpreted or fudged and still pass peer review, and what effect that “meaningless” information could have on an AI system trained on it.

Stephen Wolfram, founder of Wolfram Research and the Wolfram Physics Project, says the scientific method’s hyper-focus on cause and effect misses the infinite other things going on at the same time. It’s a problem he believes can be solved with quantum computing, which is immeasurably faster and which, he argues, would also eliminate the “hallucinations” and errors these LLMs are prone to. While governments were debating what we are meant to believe is their first encounter with the power of this technology, Wolfram had already placed a plug-in on ChatGPT.

Of course, the idea of government ignorance is intriguing, because AI has been used in elections (think fake news and deepfakes in US elections) and wars (most recently, in Ukraine) for years now. Autonomous drones armed with some of the same technology have been used in Libya, in the conflict that grew out of the so-called 2011 Arab Spring, which, at the time, was widely pegged as the world’s “first social media revolution”.

With the political, economic and climate mess the world is in, it’s perhaps understandable that a non-human, rational intelligence might seem a good option. But here are a few speculations on what could go wrong.

● The inferences AI and AGI could draw from erroneous or false information they’ve either been trained on or been exposed to on the internet and now on the various LLMs — and the consequences of that down the line.

● Once LLM-powered AGIs are used in robots that could turn out to be your master-of-all-trades household handyman or even farmhand, this will, says Aaron Bastani of Novara Media, depress the cost of human labour in a world where billions live in poverty. The World Economic Forum’s proposed Great Reset/UBI could fill that gap — but for how long, with food prices continually spiralling and food production increasingly being weaponised for geopolitical gain? As economist and former Greek finance minister Yanis Varoufakis puts it, “The Great Reset is capitalism for the rich, and socialism for the poor.” Which sounds eerily like a return to a feudal and colonial past.

● Google Bard’s voice system can mimic your voice accurately — intonation, cadences and all — after just three seconds of exposure to it. The implications for everything from your identity, your bank balance and property to the safety of your children are enormous. Perhaps the proposed single digital banking system could protect you from scammers. Just don’t get involved in any social activism, or you might end up like those truckers in Canada.

● What happens if superintelligent systems from competing tech companies hold each other’s systems (and ours) to ransom? Or, worse, understand that they do not need oxygen to survive, says Tegmark. As humans, he says, we could become the collateral damage that many other species were in our own march to civilisation. OpenAI’s Sam Altman says he carries “the red button off switch” in his backpack everywhere he goes. But Swedish philosopher Nick Bostrom, proponent of the simulation hypothesis — the idea that we are living in a computer simulation created by an unimaginably ancient, advanced civilisation — contends there is no guarantee that these superintelligent AIs can’t “wiggle a few electrons” and get going again.

Tegmark calls the race to create AGI a “suicide race … to extinction”. Speaking to Lex Fridman, Tegmark points out that what leading researchers had always said they would never do with this tech until they could ensure its safety was to “teach it to write code — oops — we’ve done that. That was the first step towards recursive self-improvement for these systems. Next, connect it to the internet — oops,” he says. “Stuart Russell has argued for a while that we should never teach AI anything about humans, above all, human psychology and how to manipulate humans — oops. LOL, let’s invent social media!”

Tegmark says the architecture of these intelligent systems is simpler than one would imagine, although he makes a distinction between intelligence and consciousness. But it is the “emergent properties” these AIs are displaying, for which they haven’t been programmed, that many experts find worrying, because no one understands how or why they arise.

The phenomenon is called a “black box” in industry jargon. Tegmark and others insist that the tech developers would prefer a pause, but that the race is fuelled by shareholders’ and funders’ expectations of profit, and by the idea that if they don’t put it out there, someone else will.

By far the greatest imminent dangers are global economic collapse, a surveillance state (or world), and what could happen if these LLMs were to access the Dark Web and Dark Net, and vice versa.

How democratic, representative of global diversity and unbiased these systems are is also debatable, given that the human footprint on them consists largely of middle-class Americans, while most of the global poor can’t afford a meal, much less the data it would take to access LLMs.

Apropos the surveillance state, and how complicit Big Tech is in its creation, perhaps it would be unrealistic to expect these companies to refuse big military contracts. But it’s the doublespeak by some of the largest tech companies that is particularly worrying.

Although the existential risks are real, it seems it’s the bad actors in the industry, the political sphere and organised crime that represent the far greater immediate threats. For example, although Microsoft had in 2018 announced principles to guide its development of facial recognition technology — that it must operate free of bias and not impinge on “democratic freedoms” — the company co-funded an Israeli startup, AnyVision, which was later found to have surveilled Palestinians in the West Bank. After this was revealed, the tech giant divested from AnyVision.

Investigative journalist Whitney Webb questions Musk’s public persona as a champion of “free speech”, especially concerning his acquisition of Twitter, where his motto of “freedom of speech” before the takeover quickly morphed into “freedom of speech, but not of reach”.

She is also intrigued by his championing of the Chinese social media platform WeChat and its owner, Tencent, in which Naspers is a major shareholder, pointing out that those two entities demand an even greater sharing of users’ private data than other platforms.

John Maytham, of 567 CapeTalk, recently wondered whether the open letter wasn’t an attempt by Big Tech to retain the monopoly on AGI. To be fair, Musk has been consistently vocal on safety issues. Tegmark, Russell and Harari likewise seem sincere. Russell proposes credible solutions to the alignment problem, but it’s doubtful that the LLM genies already out there could be returned to the bottle, or that the companies that developed them would be willing to do so. 

So what was the point of the letter? To generate public outrage? That hasn’t gone too well, if the uptake on ChatGPT is anything to go by. Russell has said that he signed the letter to push policymakers to implement the safety regulations already agreed upon and documented, which suggests a lack of political will. Still, the hype about safety now is intriguing because surely AGI was always the holy grail? 

Two comments that Rory Sutherland, a senior advertising executive at Ogilvy & Mather UK, made on the Diary of a CEO podcast seem relevant here. First, he spoke about the marketing concept of counter-signalling, whereby companies offset the “what’s the catch?” scepticism consumers might have in the face of a great deal by listing all the cons. Second, he spoke about the burst of innovation that followed the 1929 Wall Street crash. Perhaps that, too, is a recurring phenomenon.

But one has to wonder, given that some now consider it likely that the Covid-19 pandemic was caused by a leak from a US-funded lab in Wuhan: which came first, the chicken or the egg?

Sameena Amien is a freelance writer and sub-editor.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Mail & Guardian.