
Artificial intelligence presents a moral dilemma

Since the outbreak of the pandemic, the world has grown increasingly reliant on artificial intelligence (AI) technologies. Thousands of new innovations — from contact-tracing apps to drones delivering medical equipment — sprang up to help us meet the challenges of Covid-19 and life under lockdown.

The unprecedented speed with which Covid-19 vaccines were developed can partly be attributed to AI algorithms that rapidly crunched the data from thousands of clinical trials, allowing researchers around the world to compare notes in real time.

As Satya Nadella, the chief executive of Microsoft, observed, the world witnessed two years’ worth of digital transformation in just two months.

In 2017, PwC published a study showing that the adoption of AI technologies could increase global GDP by 14% by 2030. In addition to creating jobs and boosting economies, AI technologies have the potential to drive sustainable development and even out inequalities, democratising access to healthcare and education, mitigating the effects of climate change and making food production and distribution more efficient.

But, unfortunately, the potential of “AI for good” is not currently being realised. As research published by the International Monetary Fund last year shows, AI technologies are today more likely to exacerbate existing global inequalities than to address them. Or, in the words of the speculative fiction writer William Gibson: “The future is here, it’s just unevenly distributed”.

I am a professor of philosophy of science and the leader of a group concentrating on ethics at the Centre for AI Research. I focus on ensuring that these technologies are developed in a human-centred way for the benefit of all. To achieve this, we need equal education, actionable regulation, and true inclusion. These objectives are very far from being met on a global scale, and certainly are not met everywhere in Africa.

This presents a serious moral dilemma to a country such as South Africa. Do we throw caution to the wind and focus exclusively on becoming a global player in AI technology advancement as fast as possible, or do we pause and consider what measures are needed to ensure our actions will not sacrifice or imperil already vulnerable sectors of our society?

The scramble to develop technologies in the hubs of San Francisco, Austin, London and Beijing took place in a more or less unregulated Wild West until very recently. Now, the world is waking up. In June 2020, United Nations secretary general António Guterres laid out a roadmap for digital co-operation, acknowledging that the responsibility for reaching a global agreement on the ethical development of AI rested on the shoulders of the UN’s Educational, Scientific and Cultural Organisation (Unesco).

Unesco is working to build a global consensus on how governments can harness AI to benefit everyone. A diverse group of 24 specialists from six regions of the world met in 2020 and collaborated to produce a Global Recommendation on the Ethics of AI. If adopted by Unesco’s 193 member states, this agreement on technology development will be groundbreaking: instead of competing with one another to corner the market on bigger and faster technology, countries all over the world will be united by a new common vision: to develop human-centred, ethical artificial intelligence.

One of the biggest obstacles to realising the hope of AI for social good, however, is the silencing of some voices in a debate that should be a universal one. Africa’s best and brightest have been excluded from contributing to the conversation in many ways, ranging from difficulties in accessing visas to being left out of international networks. There is serious and important work being done on the subject in Africa – Data Science Africa and the Deep Learning Indaba, to name two examples.

This work is often overlooked by the international community, when in fact the world should seize the opportunity to learn from research in Africa. As Moustapha Cisse, director of Google AI in Ghana, says: “Being in an environment where the challenges are unique in many ways gives us an opportunity to explore problems that maybe other researchers in other places would not be able to explore.”

In addition, in December last year, following a high-profile parting of ways with Google, the highly regarded ethics researcher Timnit Gebru expressed deep concern about the possibility of racial discrimination being amplified by AI technologies: “Unless there is some sort of shift of power, where people who are most affected by these technologies are allowed to shape them as well and be able to imagine what these technologies should look like from the ground up and build them according to that, unless we move towards that kind of future, I am really worried that these tools are going to be used more for harm than good.”

Gebru’s fears are borne out by a plethora of examples, from racist facial recognition technology to racist predictive policing tools and financial risk analysis. Gebru calls on technical communities to become more diverse and inclusive, because inherent structural bias in training data would then stand a better chance of being picked up.

It is also becoming very clear that every person has a role in ensuring that innovation in the field upholds human rights, such as the right to privacy, or the right not to be racially discriminated against. Every person should have access to education, should be sensitised to the ethics of AI and be information literate; every person should have access to positions in tech companies and be able to participate in technological invention, and every person should be protected against possible harm from technologies in an effective and actionable way. 

Furthermore, regulations need to be actionable, legally enforceable and as dynamic as the ethics underpinning them.

First, we must guard against lofty ideals that are alien to the world of mathematics and algorithms that computer engineers inhabit. It is key that we acknowledge the multi- and interdisciplinary nature of the discipline of AI, in its full extent, in our classrooms, places of work, and governmental settings.

Second, regulation should be armed with legal force. It is too easy to sidestep regulations by citing in-house policies, or by shifting development to countries with weaker legislation in certain areas.

Third, AI ethics regulation should be supple enough to absorb future technological advances as well as changes in the AI readiness of different countries, which varies along a continuum of scientific, technological, educational, societal, cultural, infrastructural, economic, legal and regulatory dimensions.

Any new AI application can be bought or sold anywhere in the world, and “ethics dumping” is a real phenomenon in Africa. The term, coined by the European Commission and applied to AI by the well-known information ethics expert Luciano Floridi, refers to big companies simply taking their business to wherever regulation is weaker. For these reasons, the new rule book on how AI technologies are developed must be a global rule book.

As Teki Akuetteh Falconer, Ghanaian lawyer and executive director of the Africa Digital Rights Hub, said: “I’m a data protection regulator but unable to call big tech companies to order because they’re not even registered in my country!”

If Unesco’s member states adopt the ethics recommendations, it could pave the way for realising the potential of AI technologies that benefit us all.


Emma Ruttkamp-Bloem
Emma Ruttkamp-Bloem heads the University of Pretoria's department of philosophy. She leads the ethics of AI research group at the Centre for AI Research, and is a member of Unesco’s World Commission on the Ethics of Scientific Knowledge and Technology and of the African Union high-level panel on emerging technologies. She is also the general chair of the upcoming Southern African conference for AI research. Her views are her own, and not those of Unesco or the University of Pretoria.
