The enterprise we call "science" is probably the largest co-operative effort ever undertaken by humankind. It is the engine that, in the space of a few thousand years, has propelled humankind into an existential space of which its previous 200 000 years of evolution gave few signs.
It matters a lot to all of us. That is why the hidden levers that make science work are important to understand. One set of such levers is about to undergo a sea change. It concerns the way in which scientific discoveries are reliably and publicly documented, mostly in the form of journal articles, constituting what we call the "scientific literature".
Journals are convenient processors and containers of contributions in particular areas of knowledge. Mainly commercial in nature, they are a highly profitable industry with an unusual production model — authors produce the raw material (research-based articles) for free, the same community of authors carries out labour-intensive but unpaid quality control, and the publishers (while admittedly adding a variable but definite amount of value) laugh all the way to the bank.
The coming changes affect the ways in which the value and impact of individual contributions will be measured and the access that all scientists and scholars will henceforth have to the "scientific literature", and promise a general levelling of the global playing fields.
The first of the sea changes has to do with the determination of the role and value of journals and the articles they contain.
Nearly everyone knows Garfield, the lovable cartoon cat who became for a while the world's most popular comic strip. Few are aware that another Garfield, a human whose first name was Eugene, shaped virtually the entire practice of modern science through two successive and related innovations.
The rapid growth of scientific literature in the mid-20th century made it difficult for individual scientists to keep track of what was being published in an ever-increasing number of often specialised journals. Garfield first produced a handy tool for doing precisely this, easily and efficiently, by publishing a monthly magazine called Current Contents, which showed the contents pages (article titles, author names and so-called metadata such as date, volume and page numbers) of a large number of selected journals grouped in broad fields. This made possible the quick perusal by busy scientists of much of what was happening in their areas of interest.
A serious issue for such an enterprise was deciding which of hundreds of similar journals to include and which to leave out.
Garfield then came up with his second and eventually far more influential idea: a way of identifying what he called the "core journals", those collectively responsible for reporting most (say 80%) of significant scientific progress. He did so by looking in detail at the citations, the bibliographic items included in every article in every journal in his collection.
To see how Garfield's citation analysis works, one has first to understand how science mainly moves forward, which is through the publication of peer-reviewed articles in the still-expanding global journal literature.
Without formal codification, a basic set of rules is more or less universally followed by journal editors and publishers across the globe, the most important being that published work should be original, that methods should be fully described, and that all relevant results should be honestly reported, with the data subjected to appropriate statistical analysis.
A key requirement is that authors list all information drawn from previous publications in a bibliography at the end of each paper. These citations refer either to a point of departure for the new work, a method that has been used or adapted, an idea already explored, or supportive or contradictory findings made by others.
Garfield saw that a powerful method for assessing the impact of a particular paper on later work, not only in its own narrow context but also more broadly, was the measurement of the number of times the earlier paper was cited in all later papers in the literature.
To standardise the measurement he chose a "window" of two succeeding years: the citations received in a given year by all the articles a journal had published in the preceding two years, divided by the number of those articles. He called these averaged citation rates the "impact factors" of the journals concerned.
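Garfield's measure reduces to simple arithmetic: a journal's impact factor for a given year is the number of citations received that year by the articles it published in the two-year window, divided by the number of those articles. A minimal sketch of the calculation (the journal and all figures below are invented for illustration):

```python
def impact_factor(citations_in_year: int, articles_in_window: int) -> float:
    """Average citation rate: citations received in a given year by the
    articles a journal published in the two-year window, divided by the
    number of those articles."""
    if articles_in_window == 0:
        raise ValueError("no articles published in the window")
    return citations_in_year / articles_in_window

# Hypothetical journal: 150 articles published over the two-year window,
# cited 600 times in the measurement year.
print(impact_factor(600, 150))  # 4.0
```

The averaging is what the San Francisco critics seize on: a single highly cited paper can lift this figure for every other article the journal printed, which says nothing about any individual article's merit.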
These "impact factors" have for many years been considered the best possible proxy indicator of the general importance and rank of scientific journals. Journals with high impact factors were henceforth included in a commercially available journal citation index.
Garfield's big idea — propagated through the products of his Institute for Scientific Information (ISI) — rapidly spread its influence through the entire world.
It has become a principal determinant of phenomena such as the pressure to "publish or perish in ISI-indexed journals", staff promotions at universities, public funding of projects and people, growth of capacity in national research and development systems, university rankings and intense commercial competition between publishers.
In short, Garfield rules from the west coast of California to the east coast of China. It is a truly remarkable story, much too little understood by outsiders, but a well-recognised "elephant in the room" in every laboratory or academic department.
Now, after years of "impact factor" rule over the scientific world, a call for a rethink has come from no less an authority than the editor of one of the world's leading journals, Science, after a declaration in San Francisco by many leading lights.
The idea that the quality of a particular article should be judged primarily by the "company that it keeps" in the pages of a journal with a particular "impact factor" is being rejected for a host of reasons — among them the fact that the factor is an average of many values rather than a measurement of the value or impact of a particular article; the serious distortions in the publishing system that the hunt for higher "impact factors" has generated; and the heavy bias against regional journals — especially those of developing countries — that the North-dominated ISI system has maintained.
The San Francisco Declaration on Research Assessment insists that, henceforth, "article-level metrics" should be used to judge the value of each article — that is, its own merit, whether judged by its own citation record, or preferably by a combination of criteria, of which the chief one should be its intrinsic merits when read by knowledgeable peers.
Although a host of vested interests will undoubtedly fight like tigers to retain the present rating system primarily based on "impact factors", the cogency of the counter-arguments and the impact of other new developments in the journal publishing world (some of them described below) are likely to bring about pervasive change.
The second, no longer creeping revolution has to do with the fundamental issue of access to the scientific literature.
The assumption of the whole system has always been that every participant is actually able to read what's published: in fact, access has been increasingly restricted to those who can pay for it in a climate of high inflation caused partly by monopolistic market conditions and partly by the sheer growth in the content of the journals themselves.
For half a century, commercial journal publishers have rigidly enforced their "pay to read" model, based on expensive subscriptions paid by individual scientists or institutional libraries. Budgetary limits that led to cancellations of subscriptions were countered by the offering of "bundled", all-or-nothing subscriptions in which choice was forfeited and wholly unsustainable conditions were induced.
These developments were particularly deleterious for scientists working in developing countries, but by no means only for them (even wealthy Harvard University began to squeal). The pay-to-read model was also carefully retained in the otherwise promising new context of large-scale internet access to the contents of journal pages.
The governments of some countries, such as Brazil, Pakistan and Chile, have tried to lower the national burden of heavy journal subscription costs by subjecting the largest multinational publishers to direct bargaining on behalf of all their accredited public institutions.
This has brought benefits (now also being sought in South Africa as an outcome of an Academy of Science of South Africa [Assaf] study) but remains a pay-to-read model with its hyperinflationary costs and monopolistic character.
The response of the scientific community itself has been the movement for "open access" to "electronic" journals (journals that can be read online).
This has taken three forms: starting up new journals that are free to read online (but have to be paid for somehow, usually by sponsors); depositing essentially similar but not final versions of commercially published articles in institutional libraries (freely accessible "repositories"); or launching open-access journals based on a new business model by which authors, their institutions or their funders pay (usually heavy) "article processing charges" to cover the costs of publishing each accepted article plus an (as yet unregulated) profit mark-up.
Many people believe that the third of these models will be dominant in future; a major United Kingdom commission has recommended it as national policy, with huge cost implications in the form of increased research funding to pay for the article charges.
Analysts don't seem to notice that the new charges for processing articles would simply be replacing the considerable costs currently met by library journal budgets, funded through different channels.
What matters to the huge global community of users of the scientific literature is that the new model could create a kind of researchers' paradise of free and immediate access to any and every article of interest.
Publishing high-quality journals does involve costs, despite the free work of researcher authors and peer reviewers. The industry has been quick to see that the new open-access business model again lacks a proper market mechanism, in that the people who will use the service will not be the people who pay for it: it will remain an inflationary "pay as much as we ask in order to publish your work" model.
The government authorities who favour the new system in turn argue that competition between rival journals in similar fields will arise in good time and produce effective price regulation.
The business of changing the present highly profitable and well-entrenched commercial subscription system to a new model (or mixture of models) is likely to be very messy, but momentum clearly favours change. Assaf is currently scrutinising how such change will affect our local publishing system.
The third major area of current activity is the levelling of the playing fields for researchers working in developing countries. The ISI index of the so-called core literature is heavily dominated by journals in the United States and Europe. Five years ago the total number of indexed African journals was 25, out of nearly 10 000; all but two of these were South African.
A 2006 Assaf study of South African journal publications demonstrated the importance of indigenous journal publishing activity and recommended strengthening local journals immediately through the adoption of a national code of best practice, the formation of a national editors' forum, regular discipline-grouped peer review of the journals themselves and greater international involvement.
In 2009 a decision was made by Assaf, supported by the government, that South Africa should join the electronic open-access platform called SciELO (Scientific Electronic Library Online), which has been developed in South America under the leadership of Brazil. This is not only a fully effective journal citation index with built-in quality assurance, but also a rapidly growing, variously subsidised open-access publisher that does not charge for articles.
The South African collection has been certified by the SciELO Global Network and was launched in Pretoria last week, at a formal event opened by Minister of Science and Technology Derek Hanekom.
The pressure of these and many other developments on the ISI's new owners, Thomson Reuters Scientific, has already resulted in the addition of 1 500 so-called regional journals to the index (South Africa's number of indexed titles trebled), and subsequently the whole of SciELO is being added to the parent portal, now called the Web of Knowledge.
Taken together it seems reasonably likely that the situation of researchers in developing countries like our own will be improved by all three of the changes described above. We should welcome the emphasis now to be placed in value judgments on the actual merits of articles, wherever they are published. We should benefit from instant and free access to every article published everywhere. And we should support the decentralisation of the global journal publishing system.
Professor Wieland Gevers was formerly a senior deputy vice-chancellor at the University of Cape Town and president of the Academy of Science of South Africa