Artificial intelligence (AI) has become accessible and user-friendly, accelerating global development. But rapid advances raise complex issues, from ethics to cybersecurity and governance, and many sectors are struggling to keep pace with the challenges AI poses.
The legal sector is no exception. Africa’s first interdisciplinary AI and law conference, held at the University of Cape Town from 3 to 5 July, led by international law and tax firm CMS, sought to guide the industry forward. This summit marked a key moment in the continent’s efforts to address AI’s benefits and risks.
European laws set the tone while Africa charts its own path
The EU’s AI Act is lauded for taking a robust approach to regulating high-risk AI applications. The Act also sets out key principles for managing AI responsibly, including strict transparency and accuracy standards.
This law introduces a concept akin to a “nutrition label” for AI: it aims to increase transparency around the data used to train AI models and around their decision-making processes. It also stresses the importance of human oversight, ensuring that AI does not operate without proper monitoring and intervention.
The Act is expected to serve as a template for AI legislation around the world, but how will this play out in Africa, where the context is vastly different?
South Africa’s approach to AI and electronic communications legislation has progressed incrementally over the years. Most AI-related adjustments have come through amendments to existing laws to regulate or accommodate its use. A good example is the Medicines and Related Substances Act, which was amended to regulate medical devices that use AI. Similarly, legislation has been developed for the aviation sector to govern the use of drones with AI capabilities.
These incremental changes indicate that South Africa is in the process of developing comprehensive regulations to address the complex challenges AI presents.
On a continental level, the African Union’s efforts to draft guidelines for AI legislation signal a coordinated push toward a more regulated future, with the focus on both enabling AI innovation and managing its risks across Africa. The goal is to ensure that the continent is prepared to address the legal and ethical issues AI presents while encouraging its development and use.
AI challenges to governance
AI governance is all about the rules, processes and standards that organisations need to put in place to ensure the responsible development and deployment of AI. The technology presents various challenges and complexities, including regulatory compliance as well as ethical and operational risks.
Consider its use in settings like customer service. Should an AI customer service representative be manipulated by external parties, for example, it could behave in unintended ways. If attackers discover weaknesses in the AI’s programming or decision-making processes, they can alter the system’s behaviour, creating risks for both the organisation and its customers.
Similarly, the development of AI models can be influenced by the biases of the individuals and organisations that develop them.
Organisations that fail to monitor AI systems therefore risk reputational damage, regulatory penalties and operational failures, which makes continuous monitoring and evaluation of AI systems for accuracy, bias and explainability crucial.
Organisations must adopt robust, flexible governance frameworks that can adapt to the constant changes that occur as AI models evolve. Such frameworks should be centred on the principles of transparency, accountability, fairness, privacy and security to cater for the multifaceted nature of AI.
Tackling cybersecurity problems
Beyond the potential manipulation of AI systems by outside threat actors, another concern is that the very data used to train and fine-tune AI models could be targeted. AI systems that rely on vast amounts of data, particularly personal and other sensitive data, could become targets for cyberattacks. Hackers could attempt to access these datasets to steal or manipulate personal information, which could be used for fraud, identity theft and other malicious activities.
This points to the need for strong data protection and cybersecurity mechanisms as organisations build or make use of AI models.
Organisations must adopt meaningful measures to mitigate these risks. This requires a multifaceted approach that includes conducting a risk assessment and implementing policies, procedures, controls and governance frameworks. It is also crucial to have an effective response plan in place to deal with any cybersecurity incidents that might arise, and to conduct regular monitoring and evaluation of the measures adopted.
The private sector
Bringing AI systems into business operations comes with the advantage of automation and efficiency gains. While some systems present opportunities to close skills gaps, companies will still need to adapt to AI-driven changes in both workforce dynamics and customer expectations.
AI should be seen as an enabler rather than a replacement for human workers. With this approach, AI supports employees by helping them make better decisions, automating routine tasks and freeing them to focus on higher-value work. This, in turn, helps companies become more productive and competitive.
It is understandable that concerns exist around what kinds of jobs could be automated away with AI. However, many industries in South Africa are still experiencing a significant skills gap, especially in technical and specialised areas. In these cases, AI can be used as a tool to augment the capabilities of employees, helping them perform tasks more efficiently and with greater confidence.
Then there is the question of how to leverage AI to stay competitive. Business leaders are increasingly being asked what their companies are doing with AI, and there is growing pressure to incorporate it into strategies to drive growth.
Adopting AI is not just about deploying the technology — companies must be able to manage and scale AI applications effectively. This includes having the right infrastructure and talent to support AI initiatives, as well as ensuring that AI is integrated into the company’s broader innovation strategy.
Although AI’s transformative impact spans sectors, its use in governance, customer service and workforce productivity is especially significant. As organisations embrace its potential, they must carefully balance innovation with responsible oversight and compliance.
Structured governance, continuous monitoring and regulatory adaptability are key as Africa develops its own AI frameworks. Ultimately, success will hinge on aligning innovation with ethical and legal responsibilities.
Zaakir Mohamed is director and head of corporate investigations and forensics; Kabelo Dlothi is director and co-head of corporate and commercial; and Lebogang Molebale is a senior associate in corporate and commercial, at CMS South Africa.