The use of artificial intelligence in arbitration may support justice, but it cannot replace those who are tasked with safeguarding it
South Africa has no legal framework governing the use of artificial intelligence (AI) in alternative dispute resolution (ADR) proceedings, putting the principles of fairness, accountability, transparency and confidentiality at risk when machines join the table.
To offer much-needed direction for parties and tribunals integrating AI into adjudications and arbitrations, the Association of Arbitrators (Southern Africa) (AASA) issued guidelines in May 2025 on the use of AI tools in these proceedings.
AI tools are already embedded in arbitration proceedings in South Africa, assisting with a range of legal tasks: collating and sequencing complex case facts and chronologies; managing documents and expediting the review of large volumes of content; conducting legal research and sourcing precedents; drafting text (for example, legal submissions or procedural documents); and facilitating real-time translation or transcription during hearings.
While the new AI guidelines are neither exhaustive nor a substitute for legal advice, they provide a helpful framework to promote responsible AI use, protect the integrity of proceedings and balance innovation with ethical awareness and risk management. As a starting point, the guidelines stress the importance of parties agreeing upfront on the use of AI, including whether the arbitrator or tribunal has the power to issue directives regarding its use.
The use of AI in arbitration proceedings can easily create confidentiality and data security risks. One of arbitration's key advantages over public court proceedings is the confidentiality it offers, and irresponsible use of AI by the parties or the tribunal can undermine that confidentiality and expose the parties to harm.
AI tools also carry technical limitations and uneven reliability. They can produce flawed or “hallucinated” results, especially in complex or novel fact patterns, leading to misleading outputs or fabricated references; AI tools are notorious for inventing case law citations in answer to legal questions.
The AI guidelines highlight core principles that should be upheld whenever AI is used. These include that tribunals and arbitrators must be accountable and must not cede their adjudicative responsibilities to software. Humans ultimately bear responsibility for the outcome of a dispute.
Ensuring confidentiality and security is also a key principle. For example, public AI models sometimes use user inputs for further “training”, which raises the risk that sensitive information could inadvertently be exposed.
Transparency and disclosure are also important: parties and tribunals should consider whether AI usage needs to be disclosed to all participants.
Finally, fairness in decision-making is paramount. AI-generated outputs can carry biases or inaccuracies inherited from their training data, so human oversight of any AI-driven analysis is indispensable to ensure just and equitable results.
The guidelines advise tribunals to adopt a transparent approach to AI usage throughout proceedings, whether AI is deployed by the tribunal itself or by the parties. Tribunals should also consider obtaining explicit agreement on whether, and how, AI-based tools may be used, and determine upfront whether their use must be disclosed.
Safeguarding confidentiality should be considered upfront and throughout the proceedings, and agreement should be reached on what information may be shared with which AI tools so that the parties are protected.
During hearings, any AI-driven transcription or translation services should be thoroughly vetted to preserve both accuracy and confidentiality. Equal access to AI tools for all parties should be ensured so that no party is prejudiced.
Ultimately, the arbitrator’s or adjudicator’s independent professional judgment must determine the outcome of any proceeding, even if certain AI-generated analyses or texts help shape the final award.
As disputes become ever more data-intensive and as technological solutions proliferate, parties, counsel and tribunals must consider how best to incorporate AI tools into their processes. The guidelines affirm that human adjudicators remain the ultimate decision-makers.
Vanessa Jacklin-Levin is a partner and Rachel Potter a senior associate at Bowmans South Africa.