Prompt this computer programme to describe itself and it says it is “just like talking to a knowledgeable friend”.
Released by San Francisco-based company OpenAI in November, ChatGPT mimics human conversation, instantly responding to anything you type into its chatbox. It can tell you a joke and compose song lyrics; it can draft your presentation notes and offer relationship advice. You think of “it” but it refers to itself in the first person — “I”.
ChatGPT works seamlessly with OpenAI’s other flagship product, Dall-E, which generates pictures and illustrations from written descriptions.
Other tech companies such as Google and Facebook have already released their own versions of this powerful technology that promises to revolutionise the way that we work and communicate.
But its creators believe it is capable of so much more than that. Branding their creations as “artificial intelligence”, they claim the technology will eventually make machines smarter, much smarter, than humans.
“This technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility,” writes OpenAI’s chief executive, a 38-year-old American named Sam Altman, on the company’s website.
“Our mission is to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”
Climate cost, bias and deception
“It’s a god they are trying to build,” said Ethiopia-born computer scientist Timnit Gebru. Gebru, whose work on bias in internet algorithms saw her co-lead Google’s Artificial Intelligence ethics team, is not too preoccupied by the “intelligence” of such software.
Programmes such as ChatGPT are trained on vast amounts of human words and conversation, much of it scraped from the internet without consent or respect for copyright, a form of intellectual property theft.
When asked, the software can replicate the patterns and connections in those vast datasets, in a way that, to human eyes, can feel like “intelligence”.
But the software is not thinking. It is merely regurgitating the data on which it has been trained. Its answers are entirely dependent on the content of that data, although it also has a tendency to invent false answers, which its creators describe, borrowing a term from human experience, as “hallucinations”.
Gebru has dedicated much of her career to highlighting the immediate risks posed by this new software. She and several co-authors wrote a paper in 2020 — which cost her her job at Google — that outlined some of these, including:
- The extreme impact on the environment — the energy used to train ChatGPT with a huge dataset of human language just once could power 12 000 Johannesburg homes for a month;
- The potential for inherent biases and discrimination — if the data is racist and sexist (as so much of the internet is) the outputs will be too; and
- The potential for such models to deceive users — because they are so good at sounding like us, they can easily fool humans — even when the content is inaccurate or invented.
An African AI
In a co-working space in Johannesburg, a different vision for artificial intelligence is being pioneered. Lelapa AI is not trying to create one programme to outsmart us all. Instead, it is creating focused programmes that use machine learning and other tools to target specific needs.
Its first major project, Vulavula, is designed to provide translation and transcription services for under-represented languages in South Africa. Instead of harvesting the web for other people’s data, Lelapa AI works with linguists and local people to collect information — and gives them a stake in future profits.
Two of the company’s founders, Jade Abbott and Pelonomi Moiloa, share Gebru’s fears — as do a striking number of women, and especially women of colour, in the AI field.
“These programmes are built by the West on data from the West, and represent their values and principles,” said Abbott, who notes that African perspectives and history are largely excluded from the datasets used to build the large language models (LLMs) of OpenAI and Google.
That’s because they cannot easily be “scraped”. Much of African history is recorded orally, or was destroyed by colonisers, and African languages are simply not supported — speak to ChatGPT in Setswana or isiZulu and its responses will be largely nonsensical.
For Lelapa, this represents an opportunity. Because African data is so hard to find, OpenAI and Google will struggle to make their tools work effectively on the continent, leaving a gap in the market for a homegrown alternative.
“The fact that ChatGPT fails on our languages … this is the chance for us to build our own house, before they figure out how to exploit us,” said Moiloa.
The consequences of failing to build “our own house” are potentially severe, she said. “Data is the new gold. They will extract the data from us, create programmes and then sell those programmes back to us. And then all the profits flow out.”
Gebru agrees: “They’ve put the equivalent of an oil spill into the information ecosystem. Who gets to profit from it and who gets to deal with the waste? It’s the exact same pattern as imperialism.”
This article first appeared in The Continent, the pan-African weekly newspaper produced in partnership with the Mail & Guardian. It is designed to be read and shared on WhatsApp.