6 April 2011

Universities ‘not a numbers game’

Everyone remembers “lies, damned lies and statistics”, attributed, perhaps wrongly, to Disraeli. Now perhaps we should expand it: “lies, damned lies, statistics and metrics”.

Modern higher education systems are increasingly driven by numbers — management information, liquidity ratios, key performance indicators, workload models, student (and staff) satisfaction scores, research assessment grades, citation indices, media league tables … Everything, it seems, can be reduced to a number.

But can it — or should it? First, these numbers are a mixture of the good, the bad and the mad. Second, there may be something deeply incongruous about universities, institutions designed to ask questions, to engage in critical inquiry and (however clichéd) to push back the frontiers of knowledge, reducing everything to uncomplicated digits.

Of course, some numbers are good — and essential. Universities need the best possible information on staff costs, income projections and cash positions. They need to know whether their students are progressing, completing their courses and finding jobs. They need to make sure their research is sustainable. Good management information is essential to ensure universities operate effectively — as business organisations.

But there is, or should be, room for questions even in these operational matters. For example, squeezing down “non-completion” rates may be in the best interests of the institution, because it maximises income and improves league-table position. But it may not be in the best interests of individual students weighed down by caring responsibilities or going through “bad patches” in their lives.

Or, to take another example, research offices and finance departments may insist that all research (or as much as possible) is fully funded — in other words, it covers its full economic costs. But to turn down projects of significant academic potential just because they are not fully funded is plain daft. Another saying comes to mind — “the operation was a complete success, but sadly the patient died”. The point of research is not to cover its costs or make a profit; it is to improve our understanding of the world.

Some metrics have become a sad necessity. Student satisfaction must be tested, regardless of whether students pay high fees. The National Student Survey, a bit like democracy, is a bad way of doing it, but probably the best we’ve got. But its limitations need to be understood.

For example, by far the biggest determinant of institutions’ relative “performance” is their subject mix. The other big one, of course, is student mix, in terms of class, gender and ethnicity.

Research scores need to be treated with similar caution. The steep, cliff-like “grades” of earlier research assessment exercises (RAEs) were indefensibly crude. Last time, the substitution of “profiles”, covering everything from the very best to the very worst research, was a step in the right direction.

The new research excellence framework (Ref) brings “impact” into the picture, which is right, although measuring “impact” through case studies will be a fraught (and inaccurate?) business. At least the Ref has resisted the lure of “metrics” — for the moment.

There are two problems with the proliferation of metrics. First, they inevitably get translated into “winners” and “losers”. So they can be the enemies of diversity because they translate legitimate differences — in student mix, research priorities and the rest — into illegitimate hierarchies. My guess is that rankings — and the measurements that have fed them — have done far more to destroy diversity than, for example, the decision two decades ago to make polytechnics universities.

Second, they can encourage corruption. Institutions have to become adept at game-playing, often against the best interests of their students, or of junior researchers. But a more serious form of corruption is the betrayal of what higher education is for.

Higher education is not a competitive sport, like football. What I learned at university is not diminished because others have learned things, too (though my degree may have become a less valuable positional good in the labour market). My research is not diminished, nor improved, because others have carried out “better” or “worse” research.

We can all be winners, if we stick to “the good, the true and the beautiful” (as opposed to “lies, damned lies, statistics, metrics…”).

But we will all be losers if we end up, according to another cliche, knowing the price of everything and the value of nothing. — Guardian News & Media 2011

Peter Scott is professor of higher education studies at the Institute of Education