3 February 2009

Critics should do their homework

The shrill tone of much of the comment that follows the annual publication of the National Senior Certificate (NSC) results is perhaps understandable, given its importance not only to the future life prospects of candidates, but also to the development of the country.

But the ill-informed nature of much of the commentary is inexcusable, as so much information is available to sustain the debate at a much higher level. I will confine myself to the three issues that concern most critics: the numbers passing maths, the standard of those passes and the pass rate of the NSC as a whole.

In 2004 the Centre for Development and Enterprise (CDE) published a report on the state of maths and science in high schools. Based on an analysis of the aggregate marks of individual candidates conducted by Professor Charles Simkins, the CDE report concluded that, although higher grade (HG) maths passes had remained static at between 20 000 and 25 000 for a number of years, there were sufficient learners in the system with the ability to at least double these figures.

Two factors inhibited this achievement, effectively preventing thousands of candidates from gaining entry to engineering and the other high-skill careers so desperately needed in the country. The first constriction was that many schools did not offer maths at the higher grade level, if they offered it at all. The second was that many schools that did offer HG maths persuaded the majority of learners to take it at the easier standard grade (SG) level, even though large numbers of them passed with A, B or C symbols (60% or higher).

The point is well illustrated in Table 1 (not reproduced here), which supports the CDE hypothesis that the number of HG maths passes could be doubled if all learners with the ability to do so were given the opportunity.

To achieve this aim the new further education and training (FET) curriculum, culminating in the first NSC exam in 2008, simply changed the choices available to candidates, making it compulsory for all learners to register for either maths or maths literacy. The result: 287 487 candidates wrote maths, compared to the 41 000 and 306 000 who sat the HG and SG exams, respectively, in 2007 (249 784 wrote maths literacy in 2008).

Concerned by the prospect of many learners with low-quality maths passes knocking at academy doors, the universities declared a 50% pass in maths in 2008 to be equivalent to an HG pass in 2007, a level achieved by 63 034 candidates in 2008.

“Ah,” cried the critics, “this could have been achieved only with a massive lowering of standards.”

From another perspective, the new NSC curriculum presents an elegant administrative solution to what was an obdurate systemic blockage.

Perhaps the critics have something of a point: an increase to 63 034 candidates achieving the equivalent of HG maths, compared with the 54 305 predicted in Table 1, may be a little high, and the universities may want to adjust their benchmark of HG maths equivalence upwards.

A 26% increase in university exemptions in 2008 (from 85 000 to 107 000) may also be on the high side, although this increase was also achieved in 2007.
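As a quick check on that figure (a back-of-envelope verification of my own, not part of the original data):

$$\frac{107\,000 - 85\,000}{85\,000} \approx 0.259 \approx 26\%$$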

A drop in standards?

Let’s examine the evidence relating to standards. The first question to ask is: what is the best measure of standards? The public debate on the NSC is firmly fixed on the pass rate as the primary indicator of quality. This view was encouraged by the education department in the years 1994 to 2003, when it set the pass rate as its indicator of success. Pass rates declined steadily, from 58% in 1994 to a low of 49% in 1999.

One of the first steps taken by Kader Asmal, when appointed education minister in 1999, was to launch a campaign to improve Senior Certificate (SC) pass rates. The effects were immediate, with rates rising annually to 73% in 2003.

The minister and the department were triumphant, declaring victory for their policies and claiming that schools were operating more effectively. But deeper analysis by Umalusi and others since 2003 strongly indicates that the bulk of these gains were achieved by manipulating the results through four measures: eliminating high-risk candidates (notice the rapid drop in the number of candidates from 1999 to 2003); encouraging candidates to register at the easier standard grade level; lowering the standard of examination questions; and using political arguments rather than statistical techniques to raise raw scores during the moderation process. It is likely that, at best, only a small fraction of the rise in SC results can be attributed to improved school quality.

Following the release of these findings in 2003, Naledi Pandor’s ministry and the department, in cooperation with Umalusi, embarked on a systematic process of improving curriculum and assessment standards.

The effects of these efforts are seen over the period 2004 to 2008. During this time the pass rate declined (giving rise to much public condemnation of the department), but the number of candidates rose (showing improved throughput in the school system), the number of passes increased (giving more matriculants better life chances) and the standard of papers improved (through the inclusion of more questions demanding higher cognitive skills from candidates and the addition of an extra paper in all languages).

There is still a long way to go before the system reaches acceptable quality benchmarks at all levels, but a marginal drop in pass rates seems a small price to pay for the three positive developments in the past five years. Yet public opinion, fixed on one (unsuitable) indicator, remains unimpressed.

Umalusi shares the department’s view that the most valid indicator of school quality is the cognitive standard of the curriculum and assessment system. This is reflected in the research conducted by Umalusi in 2008, which looked at the comparative levels of cognitive demand of the 2007 and 2008 curricula and examination papers in certain key subjects, including English second language (taken by more than 80% of candidates), maths and maths literacy.

The maths literacy report concluded that the paper could have been pitched at a higher level. After considering all the qualitative reports, the Umalusi statistics committee adjusted maths literacy raw scores downwards before release and recommended that a greater number of challenging questions be included in the final paper in future.

A divided jury

The jury is divided on maths, which is probably a sign that the standard in 2008 was broadly comparable to that of 2007. There is some evidence that 2008 lacked challenge at the top of the range, reflected in a large increase in the number of distinctions. This is plausible, given that the third maths paper, which contained the most difficult topics, was optional in 2008 and only 11 174 candidates (4%) wrote it. But it would seem that the body of papers one and two was set at about the right level.
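For the record, that 4% figure checks out against the candidate numbers cited earlier (again a verification of my own, not in the original):

$$\frac{11\,174}{287\,487} \approx 3.9\% \approx 4\%$$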

This is an important area of debate and experts are still considering the issue. I believe the department should include the third maths paper in the NSC qualification as soon as is administratively possible, because this is the most obvious way of improving the standard of maths at the top end.

Three kinds of indicators are available for assessing the health of the FET system: quality (measured by the level of cognitive challenge of examination papers), quantity (the numbers passing) and efficiency (measured by the pass rate). All are important, but it is a tall order to improve them simultaneously and, because the overriding problem of South African schooling is quality, this must take precedence.

It is clear that pass rates on their own constitute a rather poor indicator: rising pass rates masked a distinct decline in both quality and quantity in the years 1999 to 2003, whereas falling pass rates in the past five years have been accompanied by clear gains in the quality of the curriculum and exams and the numbers of candidates passing.
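A hypothetical illustration makes the arithmetic behind this point concrete (the figures below are invented for clarity and are not NSC data). If candidate numbers grow faster than passes, the pass rate falls even while thousands more learners pass:

$$\text{pass rate} = \frac{\text{passes}}{\text{candidates}}: \qquad \frac{350\,000}{500\,000} = 70\% \;\longrightarrow\; \frac{390\,000}{600\,000} = 65\%$$

Here 40 000 more learners pass, yet the headline rate drops five percentage points.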

There is no doubt that government deserves strong criticism for administrative errors, such as the failure by a number of provinces to get their ducks in a row in 2008, thus delaying the release of close to 10% of results, and for many other things, such as the lack of political will to fire incompetent principals and to instil a stronger work ethic in provinces, districts, schools and classrooms.

But it is counterproductive to make sweeping condemnations of failure on the question of the NSC curriculum and exams when there is evidence to indicate that much progress has been made in this area.

A more sophisticated public debate can play an important role in effecting further improvements here, but that would require a more informed and rigorous analysis of the available data on the part of the critics.

Nick Taylor is chief executive of JET Education Services and a member of the Umalusi statistics committee. He writes in his personal capacity.