Matric marks adjusted only if necessary
The standardisation of national examination results generates public interest whenever Umalusi announces the approval of the results at its annual media briefing. Many educational commentators have recently weighed in on the issue and on the methodology of standardisation.
One of Umalusi’s responsibilities as a quality council in basic education is to ensure that the assessments and examinations it is responsible for are of an appropriate standard. One of the qualifications that Umalusi assures is the National Senior Certificate (NSC).
Need for standardisation
Standardisation is the moderation process used to mitigate the effects of exam-related factors (other than pupils’ subject knowledge, abilities and aptitude) that affect their performance.
The standardisation of examination results is necessary to take care of any variation in the standard of the question papers, which may occur despite careful moderation, as well as variations in the standard of marking that may occur from year to year. Other variables include undetected errors and pupils’ interpretation of questions.
During the standardisation process (which also involves statistical moderation), qualitative input from external moderators, internal moderators’ reports, post-examination analysis reports and the principles of standardisation are all considered.
Standardisation is necessary to achieve comparability and consistency of examination standards over years to mitigate the variables that affect pupil performance from one year to another, for example cognitive demand and the varying difficulty of questions, marking, curriculum changes and interventions.
Standardisation aims, in the main, to achieve an equivalent standard of examination across years, subjects and assessment bodies, and to deliver a relatively constant product to the market: universities, colleges and employers.
We can expect that when standards of examinations are equivalent there should be some corresponding statistical mark distributions.
This principle of correspondence forms the basis for comparing distributions with the norms/historical averages that are developed over four to five years. This comparison includes medians, means, pass, failure and distinction rates, and pairs analysis, which plays a valuable role in the absence of historical data.
The adjustments (decided by the assessment standards committee of Umalusi) consistently follow guiding principles. The committee comprises academics with extensive experience and expertise in statistical moderation, statistics, assessment, curriculum and education.
Although the final stage of the process, namely standardisation, may seem highly statistical, the adjustment is the culmination of a long process of receiving and reflecting on qualitative and quantitative inputs.
This starts with the setting of papers, then moderation, writing of exams, marking of exams, verification and only finally the adjustment of mark distributions.
The complexity of these stages and processes can lead to misinterpretation, especially if one observes any stage in isolation, or only the final one. The whole process of standardisation is the basis for Umalusi to declare exams fair, valid and credible, thereby building public trust and confidence.
Standardisation is an international practice, and all large-scale assessment systems use some form of standardisation.
The method used by Cambridge International Examinations involves comparing the mean and standard deviations of the current exams with those of previous years.
This data is then used to set the grade boundaries — for example, an A could be 80% and above in one year, and 75% the following year, depending on the data.
This system is also used by several African countries whose educational systems are still closely aligned with the Cambridge system.
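The boundary-setting idea above can be sketched in a few lines. This is a hypothetical illustration only: the numbers, the `shift_boundaries` function and the rule of shifting boundaries by the change in the cohort mean are assumptions for clarity, not Cambridge's actual procedure.

```python
# Hypothetical sketch of grade-boundary setting in a Cambridge-style system.
# All figures and the shifting rule are illustrative assumptions.

def shift_boundaries(boundaries, prev_mean, curr_mean):
    """Shift each grade boundary by the change in the cohort mean,
    so a harder paper (lower mean) gets lower boundaries."""
    shift = curr_mean - prev_mean
    return {grade: round(cutoff + shift) for grade, cutoff in boundaries.items()}

# Last year an A started at 80%; this year's paper proved harder
# (mean fell from 62% to 57%), so every boundary drops by 5 points.
boundaries = {"A": 80, "B": 70, "C": 60}
print(shift_boundaries(boundaries, prev_mean=62, curr_mean=57))
# {'A': 75, 'B': 65, 'C': 55}
```

In practice a real system would also weigh the standard deviation and examiner judgment, but the essential point stands: the raw marks are untouched, and only the cut-offs move.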
The method used in South Africa is that of norm referencing.
Principles and assumptions
One of the main assumptions underlying standardisation is that, for sufficiently large populations (cohorts), the distribution of aptitude and intelligence does not change appreciably from year to year, so one can expect the same performance levels from cohorts of roughly the same size over time.
The standardisation process is based on the principle that, when the standards of examinations (from one year to the next) are equivalent, there are certain statistical mark distributions that correspond with them, or should be the same, apart from unintended statistical deviations.
Standardisation is a statistical moderation that consists of comparisons between the mark distributions of the current examination and the corresponding average distributions of a number of past years to determine the extent to which they correspond.
If there is good correspondence, it can be accepted that the examinations were of an equivalent standard. If there are significant differences, the reasons for those differences should be established.
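The comparison step can be sketched as follows. The summary statistics come from the text; the 30% pass mark, the 3-point tolerance and the function name are illustrative assumptions, not Umalusi's actual thresholds.

```python
# Illustrative sketch of the comparison step: summarise the current raw-mark
# distribution and flag statistics that deviate from the historical norm by
# more than a tolerance. Pass mark (30) and tolerance (3 points) are assumed.
from statistics import mean, median

def compare_to_norm(raw_marks, norm, tolerance=3.0, pass_mark=30):
    current = {
        "mean": mean(raw_marks),
        "median": median(raw_marks),
        "pass_rate": 100 * sum(m >= pass_mark for m in raw_marks) / len(raw_marks),
    }
    # Flag any statistic outside the tolerance band around the norm.
    flags = {k: round(current[k] - norm[k], 1)
             for k in norm if abs(current[k] - norm[k]) > tolerance}
    return current, flags

norm = {"mean": 48.0, "median": 47.0, "pass_rate": 72.0}
marks = [22, 35, 41, 44, 47, 52, 58, 63, 70, 78]
current, flags = compare_to_norm(marks, norm)
```

A flagged statistic would then be investigated for valid reasons (a changed cohort, disruption, special support) before any adjustment is contemplated, exactly as the article describes.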
On occasion, these differences may be because of factors such as a marked change in the composition of the group of candidates offering a particular subject, poor preparation for the exams because of some disruption in the school programme, or very good preparation because of special support from educators.
In the absence of valid reasons for the differences, it should be accepted that the differences are because of deviations in the standards of the examination or of the marking, and the marks should be adjusted to compensate for these deviations.
In view of the Department of Basic Education’s policy regarding progressed pupils, a breakdown of the statistical mark distributions, including and excluding the progressed pupils, was provided to the assessment standards committee, but generally the difference between them was considered to be small.
Furthermore, because progressed pupils have in recent years been part of the cohort who wrote the NSC, but not identified as such, their marks would have been included in the historical average.
Standardisation decisions are finalised at a meeting between the assessment body and Umalusi. The assessment body presents its results after completing an analysis of its examination results, with a view to identifying any unexpected results, idiosyncrasies and cases deserving special attention.
Subjects are moderated independently, and decisions taken on one subject have no influence on those taken on other subjects.
The results are also examined in light of interventions that have been implemented in the teaching and learning process, shifts in pupil profiles, and so on. The assessment body makes sure that it has a thorough understanding of which adjustments would be appropriate, and what it would like to propose in this regard at the standardisation meeting with Umalusi.
The standardisation process compares the statistical distribution of the raw examination marks of the current examination with the predetermined historical average distribution of the raw marks over the past five years, and considers the adjustments required to bring the distribution of raw marks in line with the expected distribution, taking into consideration the comparative subject analysis and moderation, and marking reports.
Umalusi will only consider adjustments where there is compelling evidence that it is necessary to do so, in which case the following may occur:
- If the distribution of the raw marks is below the historical average, the marks may be adjusted upwards to the historical average, subject to the limitation that no adjustment should exceed half of the actual raw mark — half of what the candidate got — or 10% of the maximum marks for the subject.
- If the distribution of the raw marks is above the historical average, the marks could be adjusted downwards to the historical average, subject to the limitation cited above.
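The upward cap described in the first bullet can be expressed in a few lines. This is a minimal sketch: the function name and target are illustrative, and taking the smaller of the two limits when they differ is an assumption, since the article does not say which applies.

```python
# Sketch of the upward-adjustment cap: a mark may be lifted toward the
# historical target, but never by more than half the candidate's raw mark
# or 10% of the maximum marks for the subject (the smaller limit is
# assumed to apply when the two differ).

def adjust_up(raw, target, max_marks):
    cap = min(raw / 2, 0.10 * max_marks)
    return min(target, raw + cap)

# A candidate on 40/100 being lifted toward a target of 55 gains at most
# min(20, 10) = 10 marks, so ends on 50, not 55.
print(adjust_up(40, 55, 100))  # 50.0
```

A downward adjustment would mirror this logic with the same limits, as the second bullet notes.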
Standardisation offers at least some confidence of comparability between successive examination standards, thus giving candidates equal opportunity over the years, regardless of possible deviation in the standard of the question paper the candidates wrote.
It must also be noted that examination test items are not pretested and calibrated. It is hoped that as the assessment systems start to use pretested items the need for standardisation at the back end of the examinations will be minimal.
Finally, it must be emphasised that mark adjustments do not compensate for the effects of poor teaching or learning. Their sole purpose is to ensure that equivalent standards are maintained over the years for the different assessment bodies.
Mafu Rakometsi is the chief executive of Umalusi, the Council for Quality Assurance of General and Further Education and Training in South Africa.