If ivermectin had dramatic results against Covid-19, it would perform well in any well-conducted, unbiased scientific study, but this is not the case. (Delwyn Verasamy/M&G)
From the start I was sceptical of claims about ivermectin, but hopeful that it could have some benefit in combating Covid-19. However, when I read the studies claiming to show benefits, most were so flawed that you really could not conclude anything.
Meanwhile, “meta-analyses” proliferated. A meta-analysis combines multiple studies to obtain a stronger result than any single study can provide, and is supposed to follow a careful methodology to avoid including flawed results. There are variants on this, but the general idea is to search for all relevant studies, apply objective criteria to decide which to include, and request missing details from authors. Only once you have a set of studies of good enough quality, with consistent types of results, can you combine them using robust statistical methods.
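To make that final combining step concrete, here is a minimal sketch in Python of one standard approach, fixed-effect inverse-variance pooling. The effect estimates are entirely made up for illustration; nothing in them comes from any actual ivermectin study.

```python
import math

# Minimal sketch of the final combining step of a fixed-effect
# meta-analysis: inverse-variance weighting of per-study log risk
# ratios. All numbers are hypothetical; a real meta-analysis would
# first vet every study against pre-registered inclusion criteria.

studies = [
    # (log risk ratio, standard error) -- made-up values
    (-0.10, 0.30),
    (-0.25, 0.45),
    (0.05, 0.20),
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled risk ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# A confidence interval straddling 1.0 means "inconclusive", however
# confident the individual studies sounded.
```

The arithmetic is the easy part; the integrity of a meta-analysis rests almost entirely on which studies are allowed into that list in the first place.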
When I looked at the studies included in one of the earlier papers by Pierre Kory and others, most were very small and poorly described. Worryingly, the Kory paper showed no evidence of contacting the authors to clarify missing details. One of the biggest studies listed as a reference showed that ivermectin was ineffective and that some of the other remedies used as controls in other studies were harmful. A control should at worst be a placebo (no effect); if it is harmful, it is not a useful basis for comparison.
Yet they excluded this study from their results without explanation. They quote a study of African countries that purports to show that ivermectin use against river blindness correlates with low incidence of Covid-19. That source admits that the results are the same irrespective of ivermectin dose, and fails to note that the standard prophylaxis in these regions is one dose a year; if a single annual dose “protects” as well as frequent dosing, a pharmacological effect on Covid-19 is implausible.
On the one hand, meta-analyses using standard methods were showing inconclusive results after pruning studies with an inadequate standard of evidence. On the other, a cacophony of social media claims was backing poorly conducted meta-analyses, including the Kory paper and an anonymous website. None of this comes close to the standard for good science: transparency, repeatability and clear avoidance of conflicts of interest.
The problem with pre-prints
Yet it gets worse: a growing number of analyses of the underlying studies are showing that data could have been fabricated. This is, in part, a consequence of relying on papers that have not gone past the pre-print stage. Pre-print services generally carry almost anything submitted that is within their scope and not so clearly bogus that a casual reader would pick it up.
Pre-prints are a way of getting results out without waiting for the slow process of peer review. With the demand for rapid results in a fast-developing health emergency, a lot more faith has been placed in pre-prints than usual. However, even pre-prints can and should make their data sources available for checking. Patient data can be de-identified to protect privacy, but otherwise raw data and detailed methods should be published so that results can be verified.
One criticism of ivermectin studies is that they rely too much on summarised data without checking the detail; even allowing for this, properly conducted meta-analyses have generally found insufficient information to draw conclusions. If ivermectin did indeed have the dramatic results some claim, it would perform well in any well-conducted, unbiased study.
At the other end of the scale, vaccine hesitancy is fuelled by uninformed analysis of raw data. If a given number of people die within a week of being vaccinated, a correct analysis should ask how many people in a population that size die in an average week. This is how the adverse events reporting system is supposed to be used.
If the standard for safety of any vaccine were that you shouldn’t die within a week of a dose, I would get vaccinated every week and live forever. Raw data, taken at face value, can be very misleading. Particularly in the early stages of a vaccine roll-out focused on older people, any number of conditions they would have suffered anyway will appear in the data: spotting anomalies requires knowing the normal background rate of those ailments. When you do establish a genuine negative effect, it needs to be weighed against the risk of the disease itself.
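To see how the base-rate arithmetic works, here is a small Python sketch. The cohort size and mortality rate are illustrative assumptions, not real surveillance figures:

```python
# Base-rate check: how many deaths would we expect in any given week,
# vaccine or no vaccine? The numbers below are illustrative assumptions,
# not real surveillance data.

cohort_size = 1_000_000       # assumed: people vaccinated in one week, mostly elderly
annual_mortality_rate = 0.02  # assumed: roughly 2% yearly mortality in an older cohort

expected_background_deaths = cohort_size * annual_mortality_rate / 52
print(f"Expected deaths in an ordinary week: {expected_background_deaths:.0f}")
# About 385 people in this cohort would die in any ordinary week with no
# vaccine at all, so a raw count of "deaths within a week of a dose" near
# that figure is exactly what the base rate predicts.
```

Only a death count well above that background figure would even begin to suggest a safety signal worth investigating.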
A classic example is the clots linked to the AstraZeneca and Johnson & Johnson vaccines. The incidence was low enough that it took careful study to show the link was not coincidence. Investigation identified an unusual form of clotting, thrombosis with thrombocytopenia syndrome (TTS). Once it was identified, the correct treatments could be used. In any case, the risk of clotting is much higher with Covid-19 infection itself, and anyone with concerns about TTS could be offered another vaccine.
Those people who combine blind faith in ivermectin with vaccine hesitancy somehow manage, in one context, to argue that the summary is all that counts and, in another, that you can only make sense of the situation by studying the unprocessed raw data. That is not how you do science: you cannot adapt your method to fit the result you want.