17 August 2025

Never mind the botlickers, ‘AI’ is just normal technology

Demystify: Artificial intelligence has its uses, but it is the harms that should concern us. Photo: Flickr

Most of us know at least one slopper. They’re the people who use ChatGPT to reply to Tinder matches, choose items from the restaurant menu and write creepily generic replies to office emails. Then there’s the undergraduate slopfest that’s wreaking havoc at universities, to say nothing of the barrage of suspiciously em-dash-laden papers polluting the inboxes of academic journal editors.

Not content to merely participate in the ongoing game of slop roulette, the botlicker is a more proactive creature who is usually to be found confidently holding forth like some subpar regional TED Talk speaker about how “this changes everything”. Confidence notwithstanding, in most cases Synergy Greg from marketing and his fellow botlickers are dangerously ignorant about their subject matter — contemporary machine learning technologies — and are thus prone to cycling rapidly between awe and terror.

Indeed, for the botlicker, who possibly also has strong views on crypto, “AI” is simultaneously the worst and the best thing we’ve ever invented. It’s destroying the labour market and threatening us all with techno-fascism, but it’s also delivering us to a fully automated leisure society free of what David Graeber once rightly called “bullshit jobs”.

You’ll notice that I’m using scare quotes around the term “AI”. That’s because, as computational linguist Emily Bender and former Google research scientist Alex Hanna argue in their excellent recent book, The AI Con, there is nothing inherently intelligent about these technologies, which they describe with the more accurate term “synthetic text extrusion machines”. The acronym STEM is already taken, alas, but there’s another equally apt acronym we can use: Salami, or systemic approaches to learning algorithms and machine inferences.

The image of machine learning as a ground-up pile of random bits and pieces that is later squashed into a sausage-shaped receptacle to be consumed by people who haven’t read the health warnings is probably vastly more apposite than the notion that doing some clever — and highly computationally and ecologically expensive — maths on some big datasets somehow constitutes “intelligence”.

That said, perhaps we shouldn’t be so hard on those who, when confronted with the misleading vividness of ChatGPT and Co’s language-shaped outputs, resort to imputing all sorts of cognitive properties to what Bender and Hanna, also the hosts of the essential Mystery AI Hype Theater 3000 podcast, cheekily described as “mathy maths”. After all, as sci-fi author Arthur C Clarke reminded us, “any sufficiently advanced technology is indistinguishable from magic”, and in our disenchanted age some of us are desperate for a little more magic in our lives.

Slopholm Syndrome notwithstanding, if we are to have useful conversations about machine learning then it’s crucial that, instead of succumbing to the cheap parlour tricks of Silicon Valley marketing (which is, tellingly, constructed around the exact same mix of infinite promise and terrifying existential risk that its pro bono shills, the botlickers, always invoke), we pay attention to the men behind the curtain and expose “AI” for what it is: normal technology.

This, of course, means steering away both from hyperbolic claims about the imminent emergence of “AGI” (artificial general intelligence) that will solve all of humanity’s most pressing problems and from the crude Terminator-style dystopian sci-fi scenarios that populate the fever dreams of the irrational rationalists (beware, traveller, for this way lie Roko’s Basilisk and the Zizians).

More fundamentally, it also means taking a step back to examine some of the underlying social drivers of such widespread apophenia, the tendency to see meaningful patterns that aren’t there (it’s not just the “AI” that hallucinates; it’s causing us to see things too).

Most obviously in this regard, when confronted with the seemingly intractable and compounding social and ecological crises of the current moment, deferring to techno-solutionism is an understandable strategy for warding off the existential dread of life in the Anthropocene. For many people, things are as the philosopher Martin Heidegger put it in his late-life Der Spiegel interview: “Only a God can save us.” Albeit in this case a bizarre techno-theological object built from maths, server farms full of expensive graphics cards and other people’s dubiously obtained data.

Beyond this, we should acknowledge that the increasing social, political, technological and ethical complexity of the world can leave us all scrambling for ways to stabilise our meaning-making practices. As the rug is pulled from under our feet at an ever-accelerating pace, it’s no wonder we reach for some sense of certainty, whether grounded in fascist demagoguery, in phobic responses to the leakiness and fluidity of socially constructed categories, or in the synthetic dulcet tones of chatbots that have, here in the Eremocene (the Age of Loneliness), become our friends, partners, therapists and infallible tomes of wisdom.

From the Pythia who served as the Oracle of Delphi, allaying the fears of ancient Greeks during times of unrest, to the Python code that allows us to interface with our new oracles, the desire for existential certainty is far from new. But in a time when a sense of agency and a sufficient grasp of the world have been wrested from most of us, when our feeds are a never-ending barrage of wars, genocides and ecological collapses we feel powerless to stop, the desire for some source of stable knowledge, some all-knowing benevolent force that grants us vicarious power if only we learn to master it (just prompt better), has possibly never been stronger.

ChatGPT, Grok, Claude, Gemini and Co, however, are not oracles. They are mathematically sophisticated games played with giant statistical databases. Recall in this regard that very few people impute any kind of intelligence, reasoning or sensory experience to Midjourney and the other image generators built on the same contemporary machine learning paradigm as LLMs. We know they are just clever code.

But if we don’t regard Midjourney as some kind of sentient algorithmic overlord simply because it produces outputs that cluster pixels together in interesting ways, why would we regard LLMs as anything more than maths and datasets just because they produce outputs that cluster syntax together in interesting ways? Just as a picture of a bird cannot fly, no matter how realistically it is drawn, a picture of humans’ language-using faculties is not language, and reflects nothing deeper than next-token prediction; hence Bender and Hanna’s delightful term “language-shaped”.
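To make the next-token point concrete, here is a deliberately crude sketch (my own illustration, not anyone’s production system): a toy bigram model in Python that “extrudes” language-shaped text by doing nothing more than counting which words follow which in a scrap of training text and sampling accordingly. Real LLMs replace the counting with billions of learned neural-network weights and work on sub-word tokens rather than whole words, but the basic move, predict a plausible next token and repeat, is the same.

```python
import random
from collections import Counter, defaultdict

# A toy "synthetic text extrusion machine": a bigram model that picks the
# next word purely from counted co-occurrences in its training text.
corpus = (
    "the oracle speaks in riddles and the machine speaks in tokens "
    "and the machine predicts the next token in the sequence"
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def extrude(start, length=12):
    """Emit language-shaped output by repeatedly sampling a next word in
    proportion to how often it followed the current word in training."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break  # the model has never seen this word lead anywhere
        choices = list(followers)
        weights = list(followers.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(extrude("the"))  # e.g. "the machine speaks in tokens and the oracle ..."
```

Nothing in that loop understands riddles, machines or anything else; it is counting and sampling all the way down. Scale the same principle up with transformers and trillions of tokens and you get staggering fluency, but fluency is precisely what “language-shaped” warns us not to mistake for thought.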

In light of the above, I’d like to suggest that we approach these novel technologies from at least two angles. On the one hand, it’s urgent that we demystify them. The more we succumb to a contemporary, narcissism-fuelled variation of the Barnum effect, the less able we’ll be to reach informed decisions about regulating “AI”, and the more we’ll find ourselves stochastically parroting the good-cop, bad-cop variants of Silicon Valley boosterism, further lining the pockets of the billionaire tech oligarchs who are riding the current speculative bubble while bankrolling neofascism.

On the other hand, we should start paying less attention to the TESCREALists (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism; you know the type) and their “AI” shock doctrine, and focus more on the current real-world harms being caused by the zealous adoption of commercial “AI” products by every middle manager, university bureaucrat or confused citizen who doesn’t want to be left behind (which, left unchecked, tends to lead to what critical “AI” theorist Dan McQuillan terms “algorithmic Thatcherism”).

These two tasks need to be approached together. It’s no use trying to mitigate actual ethical harms (the violence caused by algorithmic bias, for instance) if we do not have at least a rudimentary grasp of what synthetic text extrusion machines do, and vice versa. In approaching these tasks, we should also challenge the rhetoric of inevitability. No technology, whether laser discs, blockchain, VR or LLMs, necessarily ends up being adopted by society in the form intended by its most enthusiastic proselytes; the history of technology is also a history of failures and resistance.

Finally, and perhaps most importantly, we should take great care not to fall into the trap of believing that critical thought, whether at universities, in the workplace or in the halls of power, is something that can or should be algorithmically optimised. Despite the increasing neoliberalisation of these sectors, which itself encourages the logic of automation and quantifiable outputs, critical thought — real, difficult thought grounded in uncertainty, finitude and everything else that makes us human — has perhaps never been so important.