A recent report (“Cosatu out of step”, August 19) suggesting that 5% of adult South Africans belong to a labour union has annoyed Congress of South African Trade Unions (Cosatu) economist Neva Makgetla. She argued recently (“Lies, damn lies and small surveys”, August 30) that “Afrobarometer’s data diverges significantly from that of the government’s Labour Force Survey, which found almost three million union members in March 2005. That comes to 10% of adults, and more than a third of formal workers.”
But these differences in measured membership are more apparent than real. The Mail & Guardian did not report the full responses to the question that Afrobarometer posed to a nationally representative sample of 2 400 adult South Africans. All in all, 5% reported being an “active member” of a trade union, 4% were “inactive members”, and another 1% described themselves as an “official leader” in some capacity. Thus, five plus four plus one equals 10% who report belonging to some or other labour union.
In other words, both survey findings supported the conclusion that any political demands made by Cosatu, or any other trade union leadership, represent the views of a minority of a minority.
Makgetla’s reply raises the important issue of how much confidence analysts, policy-makers and news media can attach to the increasing amount of data supplied by South African researchers, especially those who place what she calls “an odd reliance on small, private surveys”.
Statistics South Africa found exactly the same 10% union membership as Afrobarometer, but with a sample of more than 30 000 respondents, at least a dozen times larger than ours. Why is this interesting? Because it was precisely the supposedly small size of the Afrobarometer sample that Makgetla cited as grounds to dismiss its findings.
Yet NGOs such as Afrobarometer or the Institute for Justice and Reconciliation, or market research groups like Markinor and Research Surveys, or statutory tax-funded bodies such as the Human Sciences Research Council, regularly place what Makgetla calls “an odd reliance” on interviews with anywhere from 1 200 to 3 600 people to gain insight into the socio-political attitudes and behaviour of the South African public.
At the same time, state agencies like Statistics South Africa often interview as many as 30 000 households to assess things like income, expenditures and labour force participation. Which is more accurate?
Like Makgetla, who confidently writes that “smaller surveys are more likely to be inaccurate”, many users of survey data assume that results based on larger samples are necessarily more accurate. But let us look more closely.
The mathematics behind survey research tell us that a sample of 2 400 respondents (such as the Afrobarometer survey in question) provides estimates of the socio-political attitudes and behaviours of the entire population that are accurate, at the conventional 95% confidence level, to within about two percentage points.
Yet very large samples of 30 000 barely narrow this “confidence interval”, or what is often called the “sampling error”, reducing it only to 0,6%.
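These figures follow from the standard formula for the margin of error of a sampled proportion. A minimal sketch in Python (assuming the worst-case proportion of 50% and the conventional 95% confidence level) reproduces both numbers:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for an estimated proportion p at sample size n.

    z = 1.96 corresponds to the conventional 95% confidence level;
    p = 0.5 is the worst case, giving the widest interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 2 400:  {margin_of_error(2400):.1%}")   # about 2.0%
print(f"n = 30 000: {margin_of_error(30000):.1%}")  # about 0.6%
```

Note that multiplying the sample size by more than twelve shrinks the error by only a factor of about 3,5, because precision improves with the square root of the sample size, not the sample size itself.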
Thus, the modest drop in sampling error hardly justifies the huge additional cost of sending an army of fieldworkers out the door to visit another 28 000 people.
Why is this? Consider the simple analogy of making a large vat of soup. After a small number of spoonfuls, the cook can usually obtain a fairly reliable idea of what the soup tastes like. Tasting 10 times as many spoonfuls might improve the cook’s culinary precision, but not by much. This law of diminishing returns inherent in the mathematics of sampling theory is why very large samples such as the ones Makgetla praises are actually the exception rather than the rule in the world of survey research.
Why the need for large surveys, then? To return to the soup analogy, if the soup has not been mixed up very well, and we need to get separate tastes of the bottom, sides and top of the vat, we’ll need to taste a lot more soup.
Once all these groups are added up, we are talking about very large sample sizes. But our ability to speak with precision about national trends (like national union membership) hardly improves at all.
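The subgroup problem can be illustrated with the same margin-of-error formula. In this sketch, a hypothetical subgroup of 200 respondents within a 2 400-person sample carries a much wider margin of error than the national estimate; the figures assume a 50% proportion and a 95% confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# National estimate from the full sample of 2 400 respondents.
print(f"full sample:  {margin_of_error(2400):.1%}")  # about 2.0%

# A hypothetical subgroup of 200 respondents is far less precise.
print(f"subgroup:     {margin_of_error(200):.1%}")   # about 6.9%
```

This is why surveys built to compare many small subgroups must interview tens of thousands of people: each subgroup needs its own adequately sized sample, even though the national estimate gains almost nothing from the extra interviews.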
So, the real question is what we want to know, not how many people we can count. If the consumers of large government-sponsored household surveys need detailed comparisons of, say, changes in job-seeking strategies of coloured versus black men aged 16 to 21 in Cape Town townships, then the money is well spent on mega-surveys.
But if we are more interested in looking at national trends and possibly drilling down to compare, say, just men versus women, or only urban versus rural, random samples of 2 400 and even as few as 1 000 are perfectly fine. By all means, let us learn as much as we can by comparing alternative survey results and casting a critical eye on a whole range of methodological issues. But please, let us not throw away the tremendous advantages in efficiency that sampling provides us simply due to an obsession with size.
Bob Mattes directs the Democracy in Africa Research Unit at the Centre for Social Science Research at the University of Cape Town, and is co-author of the book Public Opinion, Democracy and Market Reform in Africa (Cambridge University Press). See www.afrobarometer.org