22 July 2008

Expand the researcher rating system

Is the rating of researchers a policy linked to an apartheid-era mentality and a response to international isolation, as Michael Cherry has suggested (Higher Learning, June 13 2008)? Or does it have a role to play in raising the level of scholarly endeavour in South Africa? I would argue that it does, and that this possibility should be explored.

For an academic to secure a rating from the National Research Foundation (NRF), his or her research outputs are evaluated by a panel of at least six expert referees, many of them from overseas. The emphasis, therefore, is on peer review and the focus is unashamedly international. In every other sphere of contemporary life — whether in business, sport or health — South Africa compares itself with international benchmarks. Why should research be any different?

Cherry has said that creative endeavour can only really flourish in a collegial environment. I disagree. So too does my colleague, academic Tim Noakes, who argues: “Too many people have a very large comfort zone with excessive rewards that blunt an ambition which is too easily satisfied.”

The principle of competition is one of the key reasons that America’s system of higher education is the best in the world. A journal such as Nature, for which Cherry is a correspondent, has a high international standing precisely because it rejects more than 90% of the articles submitted to it, not because of a collegial attitude to publishing.

Another of Cherry’s criticisms is that abuses are common and the ratings are highly subjective. I have read all the reports he cites, and none provides any evidence to support the conclusion that ratings are commonly abused. While it is true that peer review must, of necessity, rely on subjective judgements, the reviewers have full access to the candidate’s portfolio of research outputs on which to base those judgements.

In one of the few serious scientific analyses of the NRF rating system, Barry Lovegrove and Steven Johnson from the University of KwaZulu-Natal recently compared the ratings of 163 botanists and zoologists with a well-recognised bibliometric score, the h-index (a researcher’s h-index is h when h of his or her papers have each been cited at least h times). There was a significant positive correlation between the two measures, although the variance explained was relatively low, suggesting that a few of the researchers may have had a rating that was either too high or too low.
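For readers who want to see how the h-index behaves in practice, the short Python sketch below computes it from a list of citation counts. The citation numbers are invented purely for illustration and are not drawn from the Lovegrove and Johnson study.

```python
# Minimal sketch of the h-index calculation (illustrative only; the
# citation counts below are invented, not data from any real study).

def h_index(citations):
    """Return the largest h such that at least h papers have
    at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # A hypothetical researcher with nine papers.
    example_citations = [31, 22, 17, 9, 6, 6, 3, 1, 0]
    print(h_index(example_citations))  # prints 6
```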

For those researchers who are unhappy with their ratings, it is relevant that a person’s rating is not static. Researchers must renew their ratings every five years, and I am aware of two former deans at the University of Cape Town who managed to upgrade their ratings from B to A during their deanships, while another, over a 20-year period, improved from C to B to A. Clearly, they did not feel they were being “graded like meat”, nor were they disillusioned with the system.

As Noakes has challenged us: “To be the best in the world requires that we ascend to rather higher levels of ambition, intellectual curiosity, mental preparation, and the capacity for persistently deferred gratification.” Perhaps this is one of the reasons why just 10% of the academics in South Africa’s higher education sector have secured a rating. It’s too much like hard work.

In fact, it would not be surprising if only 10% of academics contribute to scholarly research outputs on an annual basis. It may also be hard for an academic to accept a rating that he or she believes does not represent his or her true standing as a researcher.

One of the greatest challenges for the NRF and Higher Education South Africa (Hesa), in supporting and sustaining the rating system, is to provide a reason why individual researchers should strive to secure a rating. I agree with Cherry that the ratings serve little useful function if their primary purpose is for university administrators to score points against one another.

The NRF needs to ensure that researchers receive an annual incentive linked to their rating (a system that was abandoned by the Foundation for Research Development, predecessor to the NRF, in the mid-1990s).

This would allay the fears researchers may harbour that their ratings benefit the institution but not the individual. It would appear that such a policy has recently been approved, with recommended amounts of R100 000, R80 000 and R40 000 for A, B and C ratings, respectively. This is a step in the right direction.

Another positive step, which I have advocated in a recent issue of the South African Journal of Science, would be to use the NRF rating system as an adjunct to the formula for determining the amount of research funding awarded to individual institutions.

At the moment, there is significantly more research funding available on the “supply side”, provided by the Department of Education (DoE), than on the “demand side”, provided by the Department of Science and Technology and administered by the NRF. The DoE allocated R1,52-billion in research funding in 2008, more than three times the funds available from the NRF.

This raises the question: Is such an approach ideal for improving South Africa’s research productivity? I don’t think so. One of the weaknesses of the current “supply side” model is that it rewards quantity and so encourages the pursuit of mediocrity. This is where the NRF ratings have a role to play.

Instead of emphasising the sheer number of publications (the basis for the DoE’s subsidy), regardless of their quality, our focus should be on a system that inspires a level of scholarship able to withstand the scrutiny of an international audience. If South Africa’s institutions of higher learning are to raise the level of their game and compete on a global stage, the NRF rating system should not be abandoned. It should be expanded.

Professor Kit Vaughan is deputy dean for research, faculty of health sciences, University of Cape Town. He writes in his personal capacity.