This year the furious national debate occasioned by the annual publication of the national senior certificate (NSC, or matric) results seems to have lasted longer than usual and probed the issues in greater depth. This is an encouraging development in that it signals a growing public understanding of the complex issues involved in measuring and assuring the standards of schooling.
Quality assurance and standardisation
Building quality in the school system is about improving the knowledge and judgment of teachers, and the exam process is an important cog in this enterprise. An array of examiners, internal moderators, teachers who conduct internal assessments, markers of the national exams, external moderators and data processors constitute a system that spans the national department of basic education, nine provincial departments and Umalusi, the body mandated by an Act of Parliament to certify the whole process.
It is no small enterprise considering that in 2010 half a million candidates wrote more than four million exam question papers in more than six thousand exam centres.
Standardisation is the last step in the matric exam system; the process was described by the Umalusi chief executive, Mafu Rakometsi, in the Mail & Guardian in the week the 2010 results were released.
Suffice it to say that standardisation is a key element of most certification systems anywhere in the world, and its motivation is nowhere better explained than in this extract from a pamphlet that the Curriculum Council of Western Australia directed at students: “Standardisation ensures that you are not disadvantaged if an examination is harder than usual in the year you do your examination.
“If an examination is harder, your standardised mark in that subject or course may be higher than your raw mark. If, on the other hand, an examination is easier than usual, your standardised mark may be lower than your raw examination mark … Standardised examination marks are used as a common scale …”
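To make that arithmetic concrete, here is a minimal sketch in Python of how such an adjustment might work. It is an illustration only, not Umalusi's actual procedure: it assumes a simple uniform shift of a cohort's marks toward a historical mean, capped at the 10 percentage points mentioned later in this article, and the marks and historical mean used are hypothetical.

```python
def standardise(raw_marks, historical_mean, max_shift=10.0):
    """Shift a cohort's raw marks toward the historical mean, capped.

    A deliberately simple illustration of standardisation: if this
    year's paper was harder than usual (the cohort mean falls below
    the historical norm), every mark is adjusted upward by the same
    amount, and never by more than max_shift percentage points.
    """
    cohort_mean = sum(raw_marks) / len(raw_marks)
    shift = historical_mean - cohort_mean           # positive for a harder paper
    shift = max(-max_shift, min(max_shift, shift))  # cap the adjustment
    return [min(100.0, max(0.0, mark + shift)) for mark in raw_marks]

# Hypothetical cohort whose paper was harder than usual: the raw mean
# is 46 against a historical mean of 52, so each mark rises 6 points.
print(standardise([30, 42, 50, 62], historical_mean=52))
# [36.0, 48.0, 56.0, 68.0]
```

Had the paper been easier than usual, the cohort mean would sit above the historical mean and the same logic would adjust marks downward, which is exactly the symmetry the Western Australian pamphlet describes.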
Manipulation of the pass rate
If there is an opportunity to change raw exam marks every year, what guarantee does the public have that the process is not used for political purposes, such as increasing key indicators to make the government look good?
This question is particularly pertinent in view of the established facts pointing to some dubious practices in the past. For example, alarmed by declining pass rates from 1994 to 1999, the then department of education (replaced by its basic education counterpart in 2009) put pressure on the system to improve this indicator.
Because the pass rate is a ratio of two numbers, the number of passes divided by the number of candidates, it can be improved by changing either or both of these quantities. In the period 1999 to 2003 the quantity that changed was the number of candidates: fewer children were given the opportunity to write matric while the number of passes stayed about the same. The result was that the pass rate went up and the government claimed victory. And it was a victory of sorts: the same number of passes, or slightly more, was achieved with fewer resources, so the system became more efficient.
But should efficiency not be trumped by the greater need to improve opportunity? Should we not rather try to achieve higher pass rates by increasing the numbers of both candidates and passes? In fact, this is what happened in the years 2004 to 2007. Interestingly, because the number of candidates increased faster than the number of passes in these years, the pass rate declined. So, by prioritising opportunity during this period, the department of education sacrificed efficiency and, because the public recognises only the pass rate as a valid indicator of quality, the government received its usual public roasting.
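A few lines of Python make the arithmetic of both periods concrete. The figures below are hypothetical round numbers chosen only to illustrate the two patterns described above; they are not actual enrolment or pass counts.

```python
def pass_rate(passes, candidates):
    """Pass rate as a percentage: passes divided by candidates."""
    return 100.0 * passes / candidates

# 1999-2003 pattern: passes stay roughly constant, fewer candidates write.
print(pass_rate(250_000, 500_000))  # 50.0  (baseline year)
print(pass_rate(250_000, 400_000))  # 62.5  (rate up, opportunity down)

# 2004-2007 pattern: both numbers grow, but candidates grow faster.
print(pass_rate(280_000, 600_000))  # 46.7  (more passes, yet the rate falls)
```

The same indicator thus moves in opposite directions depending on which quantity changes faster, which is why the pass rate alone is such a poor measure of either quality or opportunity.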
Ironically, although the 1999 to 2003 period received public approval for its increased pass rate, it was a period of declining quality. The decline was achieved in two ways: encouraging candidates to register at the easier standard-grade level, and lowering standards by making the examination papers easier, focusing largely on elementary cognitive skills at the expense of the higher-order processes of analysis and interpretation. In short, improved efficiency can be achieved by restricting opportunity or by compromising quality, or both, and this is what happened at the time.
As detrimental as the decline in quality of the passes achieved by many candidates in the early 2000s was to our education system, far more damage was done to public trust when political considerations received greater priority during the standardisation process at the expense of statistical and educational criteria. As noted by Umalusi in a report published on this issue in 2004, adjustments performed at statistical moderation meetings were generally upward, frequently by the maximum of 10% allowed.
In addition, “[one] of the consequences of the predominant trend of upward adjustments was a reluctance by the new examination authorities to accept downward adjustments when these were recommended by the Safcert/Umalusi statistics team … This certainly has resulted in an upward movement in pass rates.”
Fortunately, this all changed in 2004 when these dubious practices came to light. A new minister made it clear that the department should allow Umalusi to exercise its statutory function independently, as envisaged by the Act, and that Umalusi’s word on standardisation was final.
How do we guard against manipulation for political purposes in the future? First, Umalusi must be governed, as it is now, by a board of strong people who have no vested interest in the matric results improving or declining and whose primary interest lies in measuring the quality and quantity of schooling. Second, the question of final authority over the standardisation process must remain in the hands of independent statisticians and educationists employed by Umalusi.
Dr Nick Taylor is a research fellow at JET Education Services, a visiting researcher at the University of the Witwatersrand and a member of Umalusi’s assessment standards committee. He writes in his personal capacity. This is the first in a six-part series presented by Umalusi and the M&G in the interests of greater public understanding of the state quality-watchdog organisation’s operations.