Should economics be more like medicine? I don’t mean that economists should be more like doctors – I’ve met a few doctors – but that economists should learn from the relationship that medical practice has with medical evidence.
Medicine, like economics, deals with complex systems that are still not well understood; like economics, it has its share of quacks; but unlike economics, medicine has swallowed many of its ethical qualms about running controlled experiments in difficult circumstances.
Randomised controlled trials are now catching on in economics, especially development economics. (Such a trial would involve, for instance, approving loans to a randomly chosen subset of loan applicants.) Two new books, Poor Economics and More Than Good Intentions, each by leading practitioners of randomised trials, explain the consequent discoveries.
Clearly such trials have their limits, but I’m a big fan of the approach. However, it would be a great shame if economists learned nothing more from doctors than to use randomisation.
One lesson that has emerged all too slowly from medical practice is the need for trial registries, in which researchers give notice that a clinical trial is about to begin, noting exactly what they will do.
Trial registries sound like a pernickety piece of bureaucracy. In fact, they could hardly be more important. When analysing any statistical finding, researchers must allow for the fact that remarkable patterns sometimes emerge by chance. Imagine 20 researchers, each investigating whether mint humbugs cure cancer. Even if humbugs do nothing, at the conventional five per cent significance level we'd expect one of the 20, purely by happenstance, to find evidence that they work. She'll approach a medical journal and get her fascinating results published. The other 19 researchers may not bother at all – or, realising that their research is destined for The Journal of Uninteresting Results, they will drag their heels.
In short, published research is systematically biased in favour of striking results that may be coincidence. The trial registry matters because later researchers into the anti-carcinogenic properties of humbugs can take all the non-results into account.
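The arithmetic behind the humbug example can be sketched in a few lines of Python. This is an illustrative simulation, not anything from the column itself; it assumes the conventional five per cent significance threshold, under which 20 studies of a useless remedy should yield about one spurious "discovery" on average (20 × 0.05 = 1):

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

# Assumed setup: 20 researchers each test a remedy with no real
# effect, declaring "significance" when p < 0.05. Under the null
# hypothesis a p-value is uniform on [0, 1], so each test has a
# 5% chance of a false positive.
N_RESEARCHERS = 20
ALPHA = 0.05
N_SIMULATIONS = 10_000

total_false_positives = 0
for _ in range(N_SIMULATIONS):
    # Count how many of the 20 null studies come out "significant"
    false_positives = sum(random.random() < ALPHA for _ in range(N_RESEARCHERS))
    total_false_positives += false_positives

avg = total_false_positives / N_SIMULATIONS
print(f"Average spurious 'discoveries' per 20 null studies: {avg:.2f}")
```

Averaged over many repetitions, the count of spurious findings settles near one per batch of 20 studies, which is exactly the pattern a trial registry exposes: the one striking result is published, the 19 non-results vanish.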
Dean Karlan, a Yale economist, co-author of More Than Good Intentions, and founder of Innovations for Poverty Action, which co-ordinates and evaluates development projects in poor countries, argues that trial registries are harder to design in social science than in medicine. Researchers cannot control a project as tightly as clinicians can – they may find that the project they are evaluating is changed halfway through.
High-quality empirical research is not just a matter of using tools such as randomised trials and trial registries – it’s about the entire research culture. A simple example: if academic careers are in thrall to the number of articles published in the top journals, and if the top journals are not interested in publishing boring-sounding replications of earlier research, then these replications will not be attempted. Yet replication is a foundation of experimental science.
Professor Jonathan Shepherd, a clinician at Cardiff University, points out that the culture of evidence permeates medicine. Doctors are trained in university hospitals; their professors are themselves practising doctors, and their research agenda is driven by their needs as medical practitioners.
Meanwhile their pupils, thoroughly indoctrinated in the value of medical evidence, read about new research in the British Medical Journal every week. In short, in medicine, academic evidence and everyday practice are intertwined. No doubt this symbiotic relationship is less than perfect in the real world; nevertheless it is something economists would do well to emulate.
Tim Harford’s latest book is ‘Dear Undercover Economist’ (Little, Brown)