30 June 2008

Time to stop rating researchers

In 1984, during the isolation of the apartheid era, South Africa introduced a rating system for university researchers. The system aimed to keep researchers from leaving the country by offering basic science funding and, until 1996, freeing scientists from the need to write grant proposals. But in the past decade there has been a spate of highly critical reports on the system and most researchers want change.

In 2005, a review panel comprising experts from the United States, New Zealand and South Africa — and chaired by the former president of the Academy of Sciences of South Africa, Wieland Gevers — reported widespread dissatisfaction with the system, and recommended it be reconsidered.

In November 2007, an exhaustive review of the rating system commissioned by Higher Education South Africa (Hesa, the statutory body representing the country’s 23 universities) found “a growing scepticism and even disillusionment” with the system. The review comprised five reports by separate authors and a synthesis report.

It is highly critical of the National Research Foundation (NRF), saying it had “neglected to engage with … concerns and criticisms … in most cases only responding to technical and procedural issues”.

Yet, ignoring the recommendations of its own review, Hesa has recently endorsed the NRF’s assertion that there is “no evidence” to suggest that the rating system “should be done away with”.

Barely 11% of scholars in South Africa’s higher education sector are rated and the ratings themselves are highly subjective, relying on terms like “international recognition”, “leader in a field” and “proven track record”.

Ratings purport to reflect a researcher’s status within a particular research field, penalising multi-disciplinary work. Abuses are also common.

Hesa’s review cites a 1996 report that found researchers rated as “established” were more productive than those termed “leader in a field” or enjoying “international recognition”. And young researchers “showing potential” outperformed those with “exceptional promise”. This was despite the latter receiving, on average over the review period, almost ten times more funding than the former.

The report recommended replacing the evaluation process with a simpler accreditation for proven/able researchers, and allocating funding based on the quality, feasibility and relevance of research proposals.

The NRF system is unusual in rating individual researchers. And, uniquely, the NRF employs two different assessment systems: the standard project review process (comparable to most research councils worldwide) and the individual rating system (which is not). So researchers have to submit both research and rating proposals — and they are getting tired of it.

Another constituent report of the Hesa review notes a steady and significant increase in the number of researchers letting their rating lapse. This is because, although some researchers do get money from their institutions because of their rating, most receive little or no benefit from the system.

The NRF has not yet responded officially to the Hesa reports and recommendations. But it has announced, through its website, that it will finance a new category of funding for all rated researchers — in which individuals will receive funding commensurate with their ratings — as soon as it can afford to do so. The indications are that it decided to do this long before the Hesa reports were submitted.

The NRF’s denial of evidence suggesting the rating system should be scrapped may reflect its own vested interest in the system. More astonishing is that the Hesa review committee, which is intended to represent the interests of the academic sector, has endorsed the system in defiance of the recommendations of its own review.

The committee, chaired by University of the Witwatersrand vice-chancellor Loyiso Nongxa, comprised mostly senior university administrators. Tellingly, one of the constituent reports in the Hesa review remarks that “managers/administrators and researchers have different perceptions of the merit and impact of the rating system”.

Barring a few outsized egos, researchers would welcome change: the resistance is coming from university administrators.

Are they just too idle to assess staff performance properly? Is it that they dare not depart from criteria that, over the past quarter-century, have assumed a local canonical status? Or have they failed to escape the apartheid-era mentality of having to do things our own way on account of international isolation?

As long as their heads remain in the sand, research in South Africa will be the loser. Creative endeavour can only really flourish in a collegial environment — the antithesis of the current one in which researchers are, in the words of Stellenbosch chemist Andrew Crouch, “graded like meat”. It will be a great pity if Hesa’s endorsement of the rating system once again kills off the impetus to abandon it without delay.

Michael Cherry is professor in the Department of Botany and Zoology at Stellenbosch University and a correspondent for the journal Nature. He wrote this for the Science and Development Network news website, online at www.SciDev.Net