I just rolled a six-sided die a few times. I rolled: six, five, five, six, five. My question is: do you think the die is biased? One way to think about that question is to ask how likely I would be to roll only fives and sixes completely by chance. Not that likely: the odds are 242 to 1 against.
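For readers who want to check that figure: each roll has a 2-in-6 chance of showing a five or a six, so five such rolls in a row have probability (1/3)^5 = 1/243, which is odds of 242 to 1 against. A few lines of Python (my own sketch, not part of the argument) confirm the arithmetic with exact fractions:

```python
from fractions import Fraction

# Chance that a fair six-sided die shows a five or a six on one roll
p_high = Fraction(2, 6)  # = 1/3

# Chance that all five reported rolls are fives or sixes
p_all_high = p_high ** 5  # (1/3)^5 = 1/243

# Odds against: (1 - p) to p
odds_against = (1 - p_all_high) / p_all_high

print(p_all_high)    # 1/243
print(odds_against)  # 242
```

Exact fractions avoid the rounding noise you would get from floating-point arithmetic here.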
Before you conclude that I have crooked dice, let me mention something that had slipped my mind. As well as those fives and sixes, I also rolled four, three, three, three, two, two and one. But those results just didn’t seem that interesting, so I didn’t tell you about them. If that omission seems relevant, you’re beginning to appreciate the importance of the nerdy-sounding “trial register”.
Every day, across the world, researchers are conducting randomised controlled trials (RCTs). Some are rigorous, painstaking quests for truth, while others may cut a few corners in the search for a career-defining publication, or a licence for a new drug compound. But even if each individual trial were unimpeachable, the results would mean very little if there were a systematic bias in favour of a particular kind of result.
I reported my die-rolling with a bias towards high numbers. You might uncharitably suspect that an industry-sponsored drug trial would be more likely to see the light of day if the results show that the drug works, and you would be correct (a state of affairs ably summarised by Ben Goldacre in his new book Bad Pharma). Trials can also go missing because they end in chaos or disaster: such stories may not be worth an academic paper but they must be recorded. And trials go missing because the results are so boring that the researchers cannot bring themselves to write them up properly for publication.
As my die-rolling shows, unless we see every trial that was begun, we have a distorted picture of what is happening. There is probably only one way to achieve that goal: a compulsory register of trials. Researchers who conduct trials and then abandon them or fail to publish need to become pariahs. Systematic reviews of a particular field will then be able to consult the registers and track down any unpublished trials.
A few years ago, the International Committee of Medical Journal Editors announced that the prestigious journals under its control would no longer publish research based on clinical trials unless those trials had been formally registered before they began. This had the very welcome effect of dramatically increasing the number of registered trials and the rate at which new trials were registered. Unfortunately, as Sylvain Mathieu and others explained in the Journal of the American Medical Association in 2009, more than half the research they examined flouted that rule and was published anyway. The threat not to publish seems to have been empty.
And what of economics, which has in recent years discovered the joys of randomised trials? There is good news: the American Economic Association is creating a register for trials in economics; it is due to be up and running next year. The register will be voluntary for now, but two leading practitioners, Esther Duflo of MIT and Dean Karlan of Yale, both told me they are hopeful that a strong social norm will form in favour of registering trials. We shall see.
Trial registries are a particular challenge in the social sciences. While an RCT in medicine is designed to test whether a specific treatment does or does not work, RCTs in the social sciences are more likely to be used to search for interesting hypotheses. Social science RCTs are often partnerships between academics and practical organisations, and the trial may evolve over time in a way that a clinical trial would not.
All this complicates the business of registering the trial and then updating the entry as things change. It makes a trial registry harder to maintain. But it also makes the registry even more essential.
Tim Harford is the presenter of Radio 4’s ‘More or Less’