22 June 2025

AI in African education: We need to adapt it before we adopt it

Using AI without critical reflection widens the gap between relevance and convenience.

Imagine a brilliant student from rural Limpopo. She presents a thorough case study to her class that is locally relevant and grounded in real-world African issues. Her classmate submits a technically perfect paper filled with American examples and Western solutions that don’t apply to a rural African setting. The difference? Her classmate prompted ChatGPT and submitted a paraphrased version of its response. 

This example highlights an uncomfortable truth — generative AI is reshaping teaching and learning in higher education but, without critical reflection, it risks widening the gap between relevance and convenience.

The recent Daily Maverick article on the “CheatGPT” crisis captured a significant tension. The vast majority of large language models such as ChatGPT weren’t built with African realities in mind. Their training data privileges Western knowledge, history and frameworks. Yet across Africa, these tools are being rapidly integrated into our educational systems and often with little interrogation of their cultural biases or pedagogical implications. 

This poses obvious risks, such as the unintended consequences of imposing Global North solutions onto vastly different educational, technological and socio-economic contexts. For example, an AI tool calibrated for English-speaking, well-resourced school systems could reinforce exclusion in multilingual classrooms or among students with limited internet access.

A more subtle, longer-term concern is the growing influence of digital colonialism — the way global tech platforms shape what knowledge is visible, whose voices matter and how learning happens. In higher education, this risks weakening our academic independence and deepening reliance on systems that were never built with our contexts — or our students — in mind.

Banning AI tools is not a solution. The question isn't whether to use AI, but how to do so with care, strategy and sovereignty.

Too often, institutions swing between extremes of uncritical techno-optimism (“AI will solve everything”) and fearful rejection (“Ban it before it breaks us”). Lost in the middle are students who lack guidance on responsibly working with these tools and shaping them for African futures.

When an African law student queries ChatGPT, they're often served US case law. Ask for economic models, and the results tend to assume Western market conditions. Request cultural insights, and Western assumptions are frequently presented as universal truths.

It’s not that AI tools can’t provide localised or African-specific information, but without proper prompting and a trained awareness of the tools’ limitations, most users will get default outputs shaped by largely Western training data.

Our African perspective risks being overshadowed. This is the hidden curriculum of imported AI — it quietly reinforces the idea that knowledge flows from the North to the South. African students and lecturers become unpaid contributors, feeding data and insights into systems they don’t own, while Silicon Valley collects the profits.

So, what's the alternative? What is needed is a technocritical approach: a mindset that acknowledges both AI's promise and its pitfalls in our context. Its five core principles are:

Participatory design: Students and academic staff are not just users but co-creators, shaping how AI is embedded in their learning.

Critical thinking: Learners are taught to critically interrogate all AI outputs. What data is presented here? Whose voices are missing?

Contextual learning: Assignments require comparing AI outputs with local realities to surface more nuanced insights and acknowledge blind spots.

Ongoing dialogue: Hold open and candid conversations about how AI influences knowledge in and beyond our classrooms.

Ethics of care: Advance African perspectives and protect against harm by ensuring that AI use in education is guided by inclusion and people’s real needs — not just speed or scale.

The shape of AI in African education isn’t pre-ordained. It will be defined by our choices. Will we passively apply foreign tools or actively shape AI to reflect our values and ambitions?

We don't need to choose between relevance and progress. With a technocritical approach, we can pursue both — on our terms. Africa cannot afford to adopt AI without adaptation, nor should students be passive users of systems that do not reflect their reality. This is about more than access. It's about digital self-determination — equipping the next generation to engage critically, challenge defaults and build AI futures that reflect African voices, knowledge and needs.

AI will shape the future of education, but we must shape AI first. Africa has the opportunity not just to consume technology, but to co-create it in a relevant way. A technocritical approach reminds us that true innovation doesn’t mean catching up to the Global North — it means confidently charting our own course. 

Dr Miné de Klerk is the dean of curricula and research ([email protected]) and Dr Nyx McLean is the head of research and postgraduate studies ([email protected]) at Eduvos.