4 January 2013

University rankings a flawed tool

Students at Wits University are protesting to express their dissatisfaction with university management.

How should we assess the quality and development of higher education institutions in South Africa? What is it that these institutions do that is socially valuable enough to justify receiving large sums of public money?

Is it the economic contribution of graduates, the societal importance of the research, other forms of contribution to society – providing a home for "public intellectuals" – or some particular combination of these?

With such considerations in mind, it is interesting to interrogate the increasing popularity of international university rankings as a means of either assessing university progress ("university x has done fantastically well in the past few years, climbing 20 places in the rankings") or setting a milestone for achievement ("our university mission is to be a top 50-ranked institution").

Although rankings are seductive in their ability to summarise institutional achievement in a single number, their many flaws suggest that the approach should, at best, be assigned a peripheral role in our determinations of institutional success.

To illustrate some of the problems, consider the Times Higher Education rankings, perhaps the most prominent of the growing number of published rankings.

This approach constructs a measure of institutional quality based on the two core academic activities of teaching and research. Teaching quality is measured by asking a sample of international academics for their impression of the teaching quality at a given institution (15% of the overall institutional score) and by using institutional information on the number of undergraduates per academic (4.5%), PhD awards per academic (6%) and the ratio of PhD graduates to bachelor degree graduates (2.25%).

Reputation survey
Peer perceptions, from what the Times ranking calls its "reputation survey", are also used to assess research quality, comprising 18% of the total score of an institution. In addition, a "research influence" measure, accounting for a whopping 30%, is constructed from the number of citations the research of academics at the institution has received.

Other research-related measures are the average number of published papers per academic (6%), income from industry (2.5%), research income (6%) and total institutional income (2.25%) per staff member. The final measure, an interesting one, is "internationalisation" (7.5%) – the higher the ratio of international staff, students and co-authors, the better.
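Taken together, these indicators amount to a weighted sum: each institution's normalised score on an indicator is multiplied by the listed weight and the results are added to give the overall score (the weights above sum to 100%). The short sketch below illustrates only that arithmetic; the indicator names and scores in it are hypothetical, the normalisation to a 0–100 scale is an assumption, and it should not be read as a reproduction of the Times Higher Education methodology.

# Illustrative weighted-sum calculation using the weights described above.
# Indicator scores are invented numbers on a 0-100 scale, not actual
# Times Higher Education data.
WEIGHTS = {
    "teaching_reputation": 0.15,
    "undergraduates_per_academic": 0.045,
    "phd_awards_per_academic": 0.06,
    "phd_to_bachelor_ratio": 0.0225,
    "research_reputation": 0.18,
    "research_influence_citations": 0.30,
    "papers_per_academic": 0.06,
    "industry_income": 0.025,
    "research_income": 0.06,
    "institutional_income": 0.0225,
    "internationalisation": 0.075,
}  # these weights sum to 1.0, i.e. 100%

def composite_score(indicator_scores):
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[name] * score for name, score in indicator_scores.items())

# A hypothetical institution: middling everywhere, strong citation impact,
# weak reputation scores.
example = {name: 50.0 for name in WEIGHTS}
example["research_influence_citations"] = 90.0
example["teaching_reputation"] = 30.0
example["research_reputation"] = 30.0
print(round(composite_score(example), 1))  # 55.4

Even in this toy example the 30% weight on citations does most of the work: the strong citation score more than offsets the weak reputation scores, which is precisely why the choice of weights matters.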

Each of these measures has technical complications, some of which, such as the regional adjustment of citation scores, can lead to dubious changes in scores, but this is not the place to discuss such issues in detail. What is important to emphasise is that the chosen measures, and the weights attached to them, reflect a particular prioritisation of the possible activities of higher education institutions.

Is there any reason to believe they reflect what we want universities in South Africa to be emphasising? Does it really make sense to assess a privately funded university in a developed country against the same set of indicators as a publicly funded institution in a developing country? Are higher education objectives homogeneous across contexts?

Where institutions are still developing, it may be important that an assessment reflects an aspirational level of quality and this in turn might require international comparison. Still, it remains hard to see how such international benchmarking could possibly use the same indicators with the same weighting for all institutions.

A well-known problem with measurement-based approaches is that there are many characteristics that may be important but are not amenable to quantification. This matters because a key policy lesson of the past half-century is that if coupled (directly or indirectly) with punishment or reward, even well-intentioned measurement can create severely distorted incentives.

Questionable
It may help to consider some specific issues. For a start, the aforementioned proxies for the quality of instruction or coursework are so crude that it is questionable whether they provide any useful information at all. As a past recipient of a Times Higher Education ranking questionnaire designed to elicit opinions on the teaching quality of institutions, I am doubly sceptical of this approach.

I gave up halfway when the survey required me to rate the teaching quality of other African economics departments that I did not even know existed!

Furthermore, the absence of any information on such "sample selection" – certain types of academics completing these surveys and others not – makes it impossible to determine possible biases in the final results.

In this context, it bears mentioning that, although often developed through consultation within the higher education sector, the leading ranking systems are proprietary and therefore, ironically, not subjected to the transparency or scrutiny that accompanies most academic work.

A simple way of obtaining further insight is to consider how an institution might climb a ranking system. To increase research scores, for instance, one could implement a programme to develop research-oriented young academics, involving high-quality instruction, research training and international exposure in the relevant area.

This may cohere with social objectives, but it yields returns only after a decade or more, far too long a horizon for most academic managers and policymakers.

Citation rate
Instead, some institutions have tried to fast-track their rankings by simply buying in foreign academics with high publication rates. This increases the ratio of international staff, publication rates and (presumably) the average citation rate, as well as boosting positive impressions of the institution, which is all-important for the entirely subjective reputational measure.

Even better, in South Africa this way of gaming the system is enabled by the state's publication-based incentive system, which can in effect pay a professor's salary if the academic concerned produces six or more publications. All well and good, but what exactly has this achieved for the South African citizens who are the source of these funds? Potentially nothing: local academics have been crowded out of jobs, young academics – starting out with lower publication rates – are discarded or used primarily for teaching, and taxpayer money is devoted to research with no substantive local connection.

One might add that, historically, few South African institutions have had the resources or allure to pursue this kind of strategy and that in itself may lead to unfair assessments of institutional progress.

This kind of disturbing prioritisation, induced by ranking and similar forms of assessment, has been occurring internationally for some time.

Stories abound of British academic institutions hiring academics with a number of forthcoming publications on lucrative one-year contracts for the year in which Britain's all-important Research Assessment Exercise takes place.

What does a score based on such manipulation tell prospective students, funders and policymakers, except that the institution is able and willing to play the system? It is hardly an answer to the questions we started with.

Flaws in ranking systems
The game to climb ranking systems is inherently zero-sum: the ascent of one institution must imply the descent of another. And a great deal of success comes down to superficial impressions, requiring that significant resources also be devoted to image management rather than the core activities of universities.

It is a bottomless pit into which governments can throw public money, transferred to higher education institutions at the cost of other social priorities, and in which the greatest rewards are likely to accrue to those best placed to contribute to the rankings.

One must sympathise with university managers: even if they recognise the glaring flaws in ranking systems, the fact that others such as policymakers, potential students and possible international collaborators think differently may be enough to cause sleepless nights and lead to decisions that compromise the integrity or social objectives of their institutions.

There is no easy solution, but a good start would be for higher education bodies and policymakers to move away from the current heavy emphasis on rankings when assessing institutional progress or quality.

If higher education policy and management are well thought out, international rankings ought to be a merely incidental indicator.

Seán Muller is a lecturer in the University of Cape Town's economics department