Universities are now run almost entirely according to measures of their performance. They have sacrificed the freedom to make their own choices; instead, they must conform to the direction and priorities embodied in external measures set by others: politicians, the media and management experts.
Nearly all the measurement tools that apply to higher education are designed to influence behaviour. And the higher the stakes – in reputation or money, or both – the more unreliable the results.
Take the United Kingdom’s National Student Survey (NSS). Each spring, finalists are asked a long list of questions about their “satisfaction”. The differences between universities’ “scores” are small, and barely statistically significant. But that only intensifies the gaming frenzy. The NSS has produced a purdah period: universities take care to do nothing to upset students while they are responding to it. Surveying is avoided during examinations, when students might be feeling less happy with their lot.
University planning departments today do little planning, at any rate of the long-term strategic variety. Instead they focus on manipulating the data they report – on employment rates, for example – because this data influences newspaper league tables.
The big recent midwinter event, of course, was the publication of the results of the Research Excellence Framework, the high-stakes competition to find the top dogs in research. But who came top? Well, curiously, everyone was a winner – if you believe the spin put on the results.
Every email from some universities is now disfigured by claims that they have world-beating research in subject X, Y or Z. Every website trumpets that the institution is the top research university in Brigadoon, or West Barsetshire, or the whole of Wessex.
Entries were carefully crafted to produce the optimal results, callously excluding perfectly good researchers to fit the desired profiles, and potentially blighting future academic careers.
Is such a high fraction of research in UK universities really world-beating? If there had been such a sharp jump in A grades at A-levels, the cries of “dumbing down” would have been deafening.
As with the economy – where the UK is so much more “successful” than everyone else, especially those pesky Eurozoners – so with UK research, where we are top apart from the admirable Americans. Peer-review panels composed almost entirely of people from the UK have just proved it. It’s another example of the insular arrogance that combines insecurity with complacency.
Even the sensible and sceptical conclude that if you can’t beat them, you have to join them. So the best response that can be hoped for is a shrug of the shoulders plus a feeble attempt to make these gradings “better”.
Almost no one questions the underlying principle here that competition is a good thing. It empowers customers and drives up standards … doesn’t it? Who could object to that?
In fact, the cut-throat competition that is now forced on institutions almost certainly reduces student choice and compromises standards. Potential students are almost drowning under a deluge of info-marketing and managed data. Far from informing their choices, it is almost certainly disinforming them. The more information they have, the more disoriented and suspicious they become.
Standards are also being undermined as top-down management enforcement, driven by (increasingly tainted) benchmark data, crowds out that sense of professional responsibility and autonomy that is at the root of creativity in research as well as teaching. Performance is degenerating into skilful compliance.
Higher education is not like the Premier League. Chelsea beats Manchester City by scoring more goals (or the other way round), regardless of the run of play. There can be no doubt about when a goal has been scored, give or take some dodgy refereeing. It is, despite the multimillion-pound stakes, just a game.
But treating higher education as if it were a game corrupts. A good “student experience”, now an obligatory phrase, is not increased or diminished simply because it is ranked higher or lower in some crazy table. The real value of science and scholarship cannot be measured by whether those who undertake the underlying research have had lots of external grants (the bigger the better) or get published in highly cited journals or by top-flight university presses.
But who now dares say competition is a bad thing? – © Guardian News and Media 2015
Peter Scott is professor of higher education studies at University College London’s Institute of Education