The backlash against randomised trials in policy has begun. Randomised controlled trials (RCTs) are widely accepted as the foundation for evidence-based medicine. Yet a decade ago, they were extremely rare in other contexts such as economics, criminal justice or social policy. That is changing.
In the UK, Downing Street’s newly privatised Behavioural Insights Team has made it cool to test new policy ideas by running experiments in which many thousands of participants receive various treatments at random. The Education Endowment Foundation, set up with £125m of UK government money, has begun 59 RCTs involving 2,300 schools. In the aid industry, RCTs have been popularised by MIT’s Poverty Action Lab, which celebrated its 10th anniversary last summer – one estimate is that 500 RCTs are under way in the field of education policy alone.
With such a dramatic expansion of the use of randomised trials, it’s only right that we ask some hard questions about how they are being used. The World Bank’s development impact blog has been hosting a debate about the ethics of these trials; they have been criticised in The New York Times and in an academic article by economists Steve Ziliak and Edward Teather-Posadas.
Objections to the idea of randomisation aren’t new. The great epidemiologist Archie Cochrane once ran an RCT of coronary care units, with the alternative treatment being care at home. He was vigorously attacked by cardiologists: how could he justify randomly denying treatment to patients? The counter-argument is simple: how could we justify prescribing treatments without knowing whether or not they work?
Yet that should not give carte blanche for evaluators to do whatever they like. Hanging in the background of this debate are awful abuses such as the “Tuskegee Study of Untreated Syphilis in the Negro Male”, which began in 1932. Researchers went to extraordinary lengths to ensure 400 African-American men with syphilis went untreated, although a proven treatment was available from 1947. When the experiment ended in 1972, many men were dead, 40 wives had been infected and 19 children with congenital syphilis had been born.
The Tuskegee study was not a randomised trial, but it demonstrates the perils of treating patients not as human beings but as means to some glorious end. This topic is rightly sensitive in development aid, as there is a clear power imbalance between the agencies who pay for new interventions and the poverty-stricken citizens on the receiving end.
In a perfect world, everyone involved in a trial would give informed consent, and everyone in the control group would receive the best available alternative to the approach being tested. (These are the basic guidelines laid out for medical trials by the World Medical Association’s “Helsinki” declaration.)
Yet compromises are common. Dean Karlan is professor of economics at Yale and founder of Innovations for Poverty Action, which evaluates development projects using randomisation. He points out that telling participants too much about the trial destroys the validity of the results by changing everyone’s behaviour.
Then there is the question of who consents. Camilla Nevill of the Education Endowment Foundation says that trials are often agreed to and conducted by schools. Trying to persuade every parent to agree explicitly to the trial “decimates” the number of participants, she says.
Is this ethically troubling? At first glance, yes. But there is a risk of a double standard. Without the EEF funding, some schools would adopt the new teaching approach anyway. It is only when a researcher proposes a meaningful evaluation that suddenly there is talk of informed consent.
Ben Goldacre, an epidemiologist and author of Bad Pharma, says “it’s reasonable to hold researchers to a higher standard” if only to protect the reputation of rigorous research. But how high a standard is high enough?
Steve Ziliak, a critic of RCTs, complains about one conducted in China in which some visually impaired children were given glasses while others received nothing. The case against the trial is that we no more need a randomised trial of spectacles than we need a randomised trial of parachutes.
The case for the defence is that we know that spectacles work but we don’t know how important it might be to pay for spectacles rather than, say, textbooks or vitamin supplements. None of these children was in line to receive glasses anyway, so what harm have the researchers inflicted?
I should leave the final word to Archie Cochrane. In his trial of coronary care units, run in the teeth of vehement opposition, early results suggested that home care was at the time safer than hospital care. Mischievously, Cochrane swapped the results round, giving the cardiologists the (false) message that their hospitals were best all along.
“They were vociferous in their abuse,” he later wrote, and demanded that the “unethical” trial stop immediately. He then revealed the truth and challenged the cardiologists to close down their own hospital units without delay. “There was dead silence.”
The world often surprises even the experts. When considering an intervention that might profoundly affect people’s lives, if there is one thing more unethical than running a randomised trial, it’s not running the trial.
Twitter: @TimHarford; Tim Harford’s latest book is ‘The Undercover Economist Strikes Back’