17 January 2014

No ANA ‘mess’: they’re a success


RIGHT TO REPLY

Nic Spaull's article on the 2013 annual national assessments (ANAs), titled "Assessment results don't make sense" (December 13), was very disconcerting. This was especially so in the light of the major initiatives Minister of Basic Education Angie Motshekga has undertaken to address the low numeracy and literacy levels in our schools. 

The minister took a bold and courageous step by introducing national assessments for pupils in grades one to six and in grade nine, which aim to identify specific areas of weakness of individual pupils so that teachers can constructively address them. In this light, the step should be applauded. 

Equally, though, the department has clearly indicated that an assessment of this magnitude — one that needs to serve as a diagnostic indicator of performance at both the individual and the systemic levels — requires at least five to seven years before we can confidently say that it is producing data of the reliability and credibility we need. 

This is why, in its national report on the ANAs, the department acknowledged some of the problems Spaull raised; these, along with various other limitations, are indeed being dealt with as the assessments evolve.

In the interim, we have taken the following measures to ensure that the ANAs are reliable, consistent and credible:

• Using panels of experts and teachers to set tests in terms of the national school curriculum;

• Subjecting the tests to pilot studies to ensure they are sensitive to geographical and other known contextual factors that might affect teaching and learning in our schools;

• Exposing the tests to independent evaluations by competent experts;

• Conducting thorough and centralised moderation of the results to take account of possible variations in the marking and moderation done in schools; and

• Running a parallel process of independent verification of the administration, marking and reporting of tests.

Different tests
The department's ANA report acknowledges that one limitation lies in the use of different tests from year to year. This may mean that one year's test could be more (or less) difficult than the previous one.

We also agree that, by last year, the design of the ANAs had not reached the level of technical rigour that we were aiming for. But reaching that depends on a process — one the department is consciously undertaking — that may not be realised as quickly as we would like because of the unique magnitude of the South African ANAs.

We believe such limitations do not detract from the value that the ANAs provide in helping us to identify and monitor the quality of learning and teaching in our schools. 

In addition to the quality-verification measures already mentioned, the department issued an open tender to procure the services of a competent service provider to run a parallel verification process of the ANAs in a scientifically selected sample of more than 2 000 schools.

A consortium made up of Deloitte and SAB&T was found to be the most appropriate for the tender's requirements and was awarded the bid.

The consortium's role included:

• Administering the tests in the 2 000-school sample and monitoring their administration;

• Administering questionnaires to pupils, teachers and principals to collect contextual data, which will be used to assess how factors such as geographical setting may affect performance across different schools;

• Marking samples of pupils' test scripts from the 2 000 schools;

• Capturing and analysing all the collected data and pupils' marks; and

• Submitting a report on the verification process.

A significant variation
The results this verification produced were found to be close to the scores in the unmonitored schools, except in the case of the grade three language results to which Spaull referred. This is, as he wrote, a significant variation and we will investigate it.

In the other grades, the scores of both the verified and the unmonitored schools are shown in the graphic table on this page.

The point of the verification process was to confirm the results of unmonitored schools and so validate them. The fact that they so closely correlated with each other — as the table shows — suggests that the scores of unmonitored schools were indeed valid. 

In both the home language tests for grades six and nine, the verified scores were found to be higher than the average from the unmonitored schools. The department could have chosen the verified scores in this case, but we had in principle already agreed to use the unmonitored scores in all subjects and grades.

Comparisons are not possible
Spaull also referred to the Progress in International Reading Literacy Study (Pirls), which tested grade fours, and he pointed to the large difference between these test scores and those of the ANAs. But it needs to be understood that these two assessments differ in design, in purpose and in the times at which they were conducted, and so comparisons between the two tests are not possible.

But a valid deduction from both the ANA and the Pirls is that our pupils' performance in literacy still faces serious challenges. Beyond that, these tests' different purposes and designs mean that their results augment rather than contradict each other.

A combination of various departmental and provincial interventions explains some of the improvements in schooling conditions and the ANA results. Key among these, as reflected in the ANA 2013 report, are:

• The improved monitoring of curriculum coverage in schools; 

• The role the department's national education evaluation and development unit plays in this monitoring;

• Continuous teacher development strategies;

• National reading interventions;

• An integrated national strategy to improve literacy and numeracy at whole-school level;

• Support for English as language of learning and teaching;

• Provision of assessment exemplars and support materials; and

• Provision of grade-specific workbooks.

Looking at the senior phase
The ANAs have also provided valuable new empirical evidence about the level and quality of pupil performance in the senior phase (grades seven, eight and nine). 

The department is already working on a strategy to focus interventions in this phase, and it will extend the ANAs to include grades seven and eight this year. 

We will focus attention on continuous professional teacher development in the senior phase. The department has already realised that, while focusing on laying solid foundations for learning in the lower grades, especially in literacy and numeracy, it has not paid enough attention to grades seven, eight and nine.

Although public focus concentrated on the low grade nine mathematics results in both the 2012 and 2013 ANAs, the department takes a wider view of the general performance in this phase and has already started rolling out appropriate interventions.

We must keep in mind that the prime purpose of the ANAs is to provide feedback to teachers, parents and pupils so that appropriate remedial programmes can be formulated. Measured against this objective, no erroneous feedback has been provided to teachers, contrary to what Spaull suggested.

On the contrary, as many schools and teachers have attested, this feedback has been most valuable. His further claim that the 2013 ANAs were a "muddled mess" is mischievous (to say the least), and unjustifiably discounts the significant impact of the ANAs on pupil achievement in the past three years. 

Mweli Mathanzima is the acting deputy director general for curriculum policy, support and monitoring in the department of basic education