This year has been declared the International Year of Evaluation by the United Nations. The resolution, asking member countries to strengthen their evaluation capacity and to report on progress, was supported by 41 countries. South Africa was not a signatory, despite the participation of its Brics partners, Brazil, Russia, India and China.
It’s not clear why South Africa didn’t sign. The State of the Nation address made no mention of the resolution or of evaluation, despite international calls for greater accountability, for reform and development informed by evidence-based research, and for the direct involvement of recipients in policy-making. This is what the Year of Evaluation hopes to achieve.
We have a relatively new but active department of performance monitoring and evaluation in the ministry of the presidency. The department has developed a government-wide monitoring and evaluation policy framework and a national evaluation policy framework; it supports similar frameworks for the treasury and South Africa’s statistical quality assessment framework. The Public Service Commission undertakes systematic reviews of government performance at all levels. South Africa is also a willing participant in the African Peer Review Mechanism. Some may feel that we are doing enough in this area.
We also have an active national association, the South African Monitoring and Evaluation Association (Samea), established in 2005. It has held successful conferences with participation by international and African role players, civil society and government.
Global experts have shared knowledge and experience through training opportunities for South Africans, and there has been substantial growth in academic courses in the field. Samea members serve in international networks of this kind, and the association has a close working relationship with some government departments – a relationship that has become a model for other countries to emulate.
The government’s approach to monitoring and evaluation has an evidence-based “philosophy”: policymakers are encouraged to use the best scientific evidence in devising policy to address specific problems. The government initially focused on specific (strategic) outcomes such as quality basic education, reduced crime and a healthy population, making them the focus of government policies and action.
But more can be done to inform the public about government initiatives, objectives, approaches, successes and challenges – particularly those related to evaluation.
Monitoring and evaluation have multiple aims: to inform audiences about the effect of an intervention; to assist the government in meeting the accountability criteria of a specific policy or programme; and to improve, clarify or develop an intervention.
Yet evaluations are sometimes only used for symbolic purposes – for show – to give the appearance of action. Scholar Carol H Weiss also noted the existence of “evaluation as subterfuge”, such as the commissioning of an evaluation study as a way to delay decision-making. Some of South Africa’s commissions of inquiry fall into this category.
A major challenge for evaluation is that it operates within a political environment, which can be complex and contested, with multiple stakeholders drawing on different histories and loyalties and informed by contradictory values. Government programmes are created by political forces; decisions and investments are made by cadres loyal to certain political ideals. Any evaluation will have political connotations. A purist, scientific view seeks a value-free approach, one uncontaminated by politics, authority and ideology.
But evaluation is a political as well as a scientific practice. It is up to the evaluator to negotiate and agree on the values the practice will adhere to, and the utility criteria to which the findings will be subjected.
Evidence-based policy development has led some governments (thankfully, not South Africa) to treat randomised trials as the gold standard for evidence in public policy, despite evidence to the contrary. Such trials can improve the quality of policy decisions, but those decisions rest on many factors besides trial evidence. Relying only on knowledge produced by experts can exclude the voices of communities on the ground – ironically, the chief recipients of the relevant programmes.
Service delivery protests are one sign of dissatisfaction: people want better conditions of living; they want attention to be given to specific community issues. Growing informal settlements, continued urbanisation, the growth in unemployment and the apparent failure of schools to address education needs are among the “evidence” ordinary people use to inform their views of our government’s performance. The government, the not-for-profit sector and corporations must address these issues – and produce and share evidence that could change citizens’ views.
In this, President Jacob Zuma’s address missed a major opportunity. We need to send a strong signal to other developing countries that we support more accountability.
We must create mechanisms to engage with communities in assessing interventions and programmes for their effectiveness, to improve them where necessary and possible, to develop new interventions to address communities’ actual needs, and to end programmes that fail to meet their objectives. The purposes of the evaluations must be clear, unambiguous and not for show.
Dr Mark A Abrahams is the editor-in-chief of the African Evaluation Journal