6 August 2025

Of blindness in brinjals, bananas and peaches: What would an AI tool say?

Eastern Cape judge president Selby Mbenenge. (Office of the Chief Justice/ S Lioners)

It is a well-known cook’s trick to salt brinjals before cooking them — the salt releases the water and allows the brinjals to absorb the flavour of the intended spice. The evidence led at the Judicial Conduct Tribunal in May and June 2025 into the raunchy chain of WhatsApp messages between Eastern Cape High Court Judge President Selby Mbenenge and court secretary Andiswa Mengo reminds me of this process. 

The tribunal is grounded in a sexual harassment complaint that Mengo lodged with the Judicial Conduct Committee in January 2023. Counsel on both sides sought to enlist expert testimony on intimate WhatsApp emojis, hauling them into the public space and rubbing salt into the outwardly salacious combinations of text requests and brinjal emojis to release their underlying meaning.

To spice it up, views on cultural norms and non-verbal “nos” muddy the waters of what is and is not considered appropriate in the workplace. Add to this the further complication of the workplace power dynamics between a male superior (Mbenenge is 64 years old) and a female subordinate (Mengo is 40).

Rendering justice from this melting pot is proving to be a lengthy process. The tribunal is set to resume in late October 2025. 

I have previously written, in an article titled Sexual Corruption — Consent: Same but Different, on how gendered and cultural norms dictate power in sexual corruption matters. That article examined consent, and the context in which verbal and non-verbal “nos” can be expressed (or not), through the 2024 high court case of The Embrace Project NPC v Minister of Justice and Correctional Services and Others.

Workplace conduct in matters of sexual appropriateness is ripe for re-examination, as are the ways in which justice for women should be delivered, especially since many workplaces are seized with the tension between traditional workplace conventions and different generational expectations of norms and traditions.

A new social compact, akin to the 1955 Freedom Charter adopted by the Congress of the People, is being conceived this year. According to the 2025 State of the Nation address, it is called “a pathway to a people’s plan”. The National Dialogue 2025 is being framed as a call to action. South Africans should be ready to grapple anew with how justice, freedoms and universal rights are distributed in the kind of society to which we profess allegiance.

By 2030, the World Economic Forum tells us, young Africans are projected to make up 42% of the world’s youth, while 75% of Africa’s population will be under 35 years of age. How we meet the needs of the youth within a traditional conception of the workplace will set up a clash between the traditional and the new.

In South Africa, women face a double burden of unemployment. Statistics South Africa tells us that, in the age cohort of 15 to 34 years, the rate for women not in employment, education or training sits at 48.1%. Compare this with 42.2% for men, and the underlying precariousness of job security for women in this age group makes Mengo’s claim of sexual harassment even more stark.

How do we expect workplaces to operate, considering prevailing norms, new ones and diverse cultures — and what would drive appropriate behaviour? What is appropriate behaviour, according to culture, is contentious. A discussion on it risks delving into cultural and moral relativism — a slippery slope of bending and obfuscating morality at the altar of culture. 

The Oxford English Dictionary defines cultural relativism as the theory that beliefs, customs and morality exist only in relation to the particular culture in which they originate and are therefore not absolute.

This theory creates a direct tension with the concept of universal rights, the Freedom Charter and the Bill of Rights on which our Constitution and our society are founded.

As lawyers, we try to focus our arguments on what is appropriate or not by aligning them with the precepts of what justice should look like in a free, impartial and equal society. Anything else risks circular arguments about whose culture trumps the others’ and who must sacrificially bear the most harm to that trump.

Justice, according to John Rawls in his seminal book A Theory of Justice, comprises the principles that free and rational persons would agree to when placed in the original position — behind a veil of ignorance where there is no class, no race, no gender, no knowledge of their abilities, intelligence or strengths, nor even of their own conception of the good.

Justice is also popularly depicted as a blindfolded woman holding perfectly balanced scales. Underlying the notion of blind justice would be the ways in which appropriate behaviour gives effect to justice. In society, laws are one way of regulating behaviour. Cultural norms represent another. And this is where the difficulty lies. 

Our legal system has, since 1994, represented a contextual typology. This is intentional. It can give effect to the type of justice that aspires to blind goodness, rooted in the Constitution, while at the same time remaining acutely aware of the context of inequality created through the legally enforced system of apartheid.

So the Constitutional Court’s judgments have tended to view context and the distribution of justice through this lens. It can be argued, then, that the law is mostly blind within a given human and social context, and that context is left in the hands of the human-in-the-loop: the expert judges in whom we have placed our trust to interpret our laws against the bedrock of universal rights so that the good is maximised.

Would cultural norms have the same gravity? What if some cultural norms enforce behaviour that does not maximise the good and instead promotes harm that is not acceptable in society today? Honour killings in rural, tribal Pakistan; female genital mutilation in tribal North Africa; teen pregnancies by sugar daddies in South Africa all contain elements of cultural norms and societal acceptance. All cause harm.

If we are to argue that justice should mostly be blind and impartial, while maximising the good, then should the argument on culture not take a back seat to the laws in place to regulate appropriate behaviour?

If we try to be as agnostic as possible about what justice should look like, an AI tool to discern appropriate behaviour in a workplace might prove useful to a judicial arbiter appointed to such cases. If the tool is built on the framework of existing harassment legislation, like the Employment Equity Act and the 2022 Harassment Code, it could benchmark the definition of misconduct against tested workplace behaviour.

The Harassment Code begins its definition of harassment as “unwanted conduct that impairs dignity”.

The tool could then be applied against a dedicated, closed dataset derived from decided case law. Human intervention could mitigate the risk of the tool scraping unreliable material: humans could feed into its decision-making process by building parameters of what conduct is and is not appropriate in a workplace. Adding a predictive indicator on consent (the forms consent takes and what they look like) would bolster the tool’s evaluation.

The results could prove startling. 

Alleged harassment in the workplace could then be tested against the legal definition in a tool designed to draw on a closed dataset of prior decided harassment cases, benchmarked against human inputs on parameters and consent.

In simple terms, a chain of WhatsApp messages could be uploaded into the tool, which would churn out the likelihood that the conduct constitutes harassment, tested against a consent indicator, as a probability percentage. It would convert a legislative definition into an automated probability: a barometer of the likelihood of harassment. The tool could also be styled to perform a similar task, ChatGPT-style, with user prompts on other types of conduct, besides WhatsApp messages, uploaded for evaluation. A tool like this could be a game-changer.
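For the technically curious, here is a deliberately simple sketch in Python of what such a pipeline might look like. It is a minimal illustration, not a description of any real tool: the tiny labelled dataset, the choice of a TF-IDF and logistic-regression model, and the list of refusal markers are all hypothetical stand-ins for the closed case-law dataset and the human-built parameters described above.

```python
# A minimal sketch, assuming a hypothetical closed dataset of decided
# harassment cases labelled by human experts. All data is illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: message chains from decided cases,
# labelled 1 (harassment found) or 0 (no harassment found).
CASES = [
    ("Please send me a full-length photo of yourself", 1),
    ("The draft judgment is ready for your review", 0),
    ("Why are you ignoring my messages? Answer me", 1),
    ("Court resumes at 10am tomorrow, please confirm", 0),
]
texts = [text for text, _ in CASES]
labels = [label for _, label in CASES]

# TF-IDF features feeding a logistic regression: a deliberately simple
# stand-in for whatever model the real tool would use.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def harassment_probability(message_chain: str) -> float:
    """Return the model's probability that the chain constitutes harassment."""
    return float(model.predict_proba([message_chain])[0][1])

# A crude consent indicator built from human-defined parameters: phrases
# that prior cases treated as verbal or non-verbal refusals.
REFUSAL_MARKERS = ["no", "please stop", "i am not comfortable", "not interested"]

def consent_signal(message_chain: str) -> bool:
    """True if the chain contains any marker of refusal (a non-consent signal)."""
    lowered = message_chain.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

chain = "Please send me a photo. No, I am not comfortable with that."
print(f"Harassment probability: {harassment_probability(chain):.0%}")
print(f"Refusal detected: {consent_signal(chain)}")
```

Even this toy version makes the design question visible: the probability is only as good as the decided cases it learns from, and the consent indicator is only as good as the human-built list of what refusal looks like.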

Responsible AI guidelines drawn from current international best practice dictate that it would need to include the following (a rough sketch follows the list):

  • Clear, targeted ethical prompts; 
  • Guardrails with interventions to ensure responsible framing of the prompts;
  • Defined databases, so that scraping is ethical and accurate. For example, the database could include examples from decided cases of what type of conduct constitutes harassment, what “unwanted” means, how “no” has been conveyed before, how not saying “no” has been conveyed, how many actions make conduct unwanted, and so on; and
  • Ensuring that the model inputs are overseen by humans-in-the-loop who can intervene and mitigate risk of error and biases.
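Continuing the hypothetical sketch above, the last two guardrails might be wired together along these lines. The review band, thresholds and routing rules are illustrative assumptions, not tested values, and `harassment_probability` is the hypothetical function from the earlier sketch.

```python
# A minimal sketch of guardrails plus a human-in-the-loop, continuing the
# hypothetical example above. Thresholds and rules are illustrative only.

REVIEW_BAND = (0.35, 0.65)  # borderline probabilities are deferred to a human

def evaluate_with_guardrails(message_chain: str) -> str:
    """Refuse empty prompts and defer borderline scores to a judicial arbiter."""
    # Guardrail: responsible framing, so only concrete conduct is evaluated.
    if not message_chain.strip():
        return "Rejected: no conduct supplied to evaluate."

    p = harassment_probability(message_chain)  # from the earlier sketch

    # Human-in-the-loop: borderline scores are never decided by the model alone.
    low, high = REVIEW_BAND
    if low <= p <= high:
        return f"Deferred to human arbiter (borderline probability: {p:.0%})."
    verdict = "likely" if p > high else "unlikely"
    return f"Automated flag: {verdict} harassment ({p:.0%})."
```

The point of the band is that the tool never pretends to certainty it does not have; the expert judge, not the model, decides the hard cases.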

Crisp, blind, without obfuscation and without relativism, the tool could flag risky behaviour, prompt more appropriate behaviour and, in doing so, shift norms in the workplace and between genders.

Could it be that simple? There are obvious hazards. Context is one. I can hear lawyers vociferously arguing against a blunt and crass tool such as this one to weed out errant, inappropriate behaviour. I can see the potential for error, for over-simplification and for binary results. For deriding the human value in judgments and for reducing complicated evaluations to crass AI tools. 

Intervention in the form of a human-in-the-loop in all AI tools mitigates such risks. In a similar fashion, a judicial arbiter would be the expert human-in-the-loop, implying that we would need to place our trust in our learned judges to render a decision on the tool’s evaluation — but can we do that?

I would contend that, as seasoned consumers of many types of social media apps and other AI tools, we have already delegated autonomy and trust to anonymous algorithms in myriad ways. Virtual recommendations dictate our music choices, our interest groups, our food choices, our holidays and our political thought. 

Our trust (and privacy) has already been freely handed out with neither reservations nor questions about who the human arbiters driving the algorithms on the back-end of these apps are. Never mind that the human arbiter is not an expert in justice, not driven by maximising the good and not rooted in universal moral and legal principles like judges are.

Not such a big jump now, is it?

Luthfia Kalla is an anti-corruption compliance lawyer with a special interest in ethics and following the (illicit) money.