11 September 2015

The varsity ranking system is broken

Times Higher Education (THE) has just released a version of its university ranking system specifically designed for Africa. This differs somewhat in the criteria being measured and the weighting apportioned to each, but it is underpinned by the same philosophy as international systems such as the Shanghai Academic Ranking of World Universities and the THE international rankings.

Rankings are bad science because several metrics are added together and then averaged. Can we really average the “number of PhDs” with the “number of citations” and so on and arrive at a meaningful measure of quality? Averaging apples and oranges tells us nothing about the quality of either.

We also need to question the inclusion of tomatoes in this fruit calculation. Some of these tomatoes, such as reputation surveys, which count for as much as 25%, are contentious for good scientific reasons.

It’s not just that reputation surveys are crude, blunt measures, but many American universities also hire spin doctors, insist on vetting anything their staff say to the press and suppress student discontent in order to improve their score.

Furthermore, the calculation of publications and citations ignores the reality that research on, say, a “community-based malaria prevention initiative” will have sparse readership in the current Eurocentric model of academic publishing, and that publications in French, Portuguese or African languages are unlikely to be taken into account.

The measures used are not the only problematic aspect of the calculation: the weighting of each item is also subjective. If, for example, the international-to-domestic staff ratio counted for 4% of the total instead of 2.5%, the order of the universities would shift. Does this mean their quality has changed?
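
To make the point concrete, here is a purely hypothetical sketch with invented scores and weights (the metric names only loosely echo the categories these systems use). Nothing about either university changes; only the weight attached to one metric does, yet the order reverses.

```python
# Hypothetical illustration: all scores and weights below are invented.
# Moving one weight from 2.5% to 4% flips the rank order of two
# universities whose underlying scores have not changed at all.

universities = {
    "University A": {"citations": 80, "reputation": 60, "intl_staff": 20},
    "University B": {"citations": 77, "reputation": 60, "intl_staff": 80},
}

def composite(scores, weights):
    """Weighted sum of metric scores (weights sum to 1)."""
    return sum(scores[metric] * weight for metric, weight in weights.items())

# Two weighting schemes that differ only in how much the
# international-to-domestic staff ratio counts towards the total.
schemes = {
    "intl staff at 2.5%": {"citations": 0.600, "reputation": 0.375, "intl_staff": 0.025},
    "intl staff at 4.0%": {"citations": 0.600, "reputation": 0.360, "intl_staff": 0.040},
}

for label, weights in schemes.items():
    ranked = sorted(universities, key=lambda u: composite(universities[u], weights), reverse=True)
    totals = {u: round(composite(universities[u], weights), 1) for u in universities}
    print(label, "->", ranked, totals)
    # intl staff at 2.5% -> University A first (A 71.0, B 70.7)
    # intl staff at 4.0% -> University B first (A 70.4, B 71.0)
```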

Rankings are bad science because they are always only an approximation of quality, but they are presented in the media as, and are understood by the public to be, the real deal. These calculations present quality as if it is something that can be objectively and neutrally measured. But perhaps most worrying of all is that they treat quality as something that is generic and without context.

In South Africa, universities are rightly under pressure to ensure they have a student body that is more representative of our nation’s race and socioeconomic demographics. Given that the legacy of apartheid education remains in our schools, attempts to widen access have critical financial and pedagogical implications for universities. But such issues are of no interest to ranking systems.

The transformation agenda doesn’t stop at the student body; it includes calls for changes in staff profile and for curriculum transformation. Again, ranking systems care not a jot about such issues, and so institutions that invest time and energy in these may well be doing so at the expense of the issues the rankings do measure.

Times Higher Education emphasises that the metrics they use for their calculations are not appropriate for all institutional missions and that this should be taken into account. But by whom? Publishing the rankings sells papers and influences public opinion about our universities. Criticism of the extent to which the calculations reflect an institution’s role in society is of far less interest.

Most ranking systems, besides disregarding the role our universities play in the nation’s transformation, also fail to take into account the issue of value for money. Institutions that are able to charge high student fees and are sitting on millions in apartheid-era investments will naturally fare better than those unshackling themselves from decades of disadvantage.

Value for money can also be considered from the taxpayers’ perspective – ranking systems fail to take into account the relationship between the state’s financial investment in a university and the graduates and research it produces.

If the rankings are such bad science, you may wonder why they have such power. In South Africa, some universities, such as Rhodes University, have made a principled decision not to participate in university ranking systems. These universities do not provide the organisations with the data for their dubious calculations, and so the calculations are made only on publicly available data.

But as the popularity of rankings grows these universities are being put under increasing pressure to play this dangerous game and will no doubt be unable to hold out much longer.

Rankings are not just bad science, they also have very real consequences. Increasingly, based on rankings, parents select universities for their children, funders donate money and international partners choose who to collaborate with.

So universities have started to take them very seriously. Several South African universities now have dedicated staff working with ranking organisations to provide them with the data they require. I’m sure it is only a matter of time before our universities begin to hire the ratings consultants so plentiful in the American system.

More worrying is that institutional decisions are being made on the basis of what will push up the rankings and not on the basis of what will improve the university’s quality in its own context and for its own students. And thus South African universities fail to see themselves as part of a publicly funded sector working together to meet many demands. Instead, we play the neoliberal rankings game, which turns the university into a branded company guarding itself against the successes of its competitors.

Sioux McKenna is an associate professor and a higher education studies doctoral co-ordinator in the Centre for Higher Education Research, Teaching and Learning at Rhodes University