Lies, damn lies and medical statistics

I love statistics so much I recently took a course on the subject. Disappointingly, one of the conclusions I reached was that I am never going to be as good at statistics as I would like (I am 99.9 per cent sure of this).

Let me share my enthusiasm, though. Medical statistics have helped us to work out the best treatments for HIV and tuberculosis. They have revealed the link between cigarettes and lung cancer and shown us that childhood immunisations are safe. What’s more, they can reveal what is nonsense, hype or exaggeration. A proper understanding of statistics offers protection in the face of unscientific anecdotes being used to make a case for prescribing treatment.

I particularly love relevant and useful statistics. The following statistic comes from the West of Scotland Coronary Prevention Study, a large, randomised, controlled trial investigating the ability of cholesterol-lowering drugs, statins, to reduce cardiovascular death. It shows that you have to treat 107 men at high risk of heart disease with statins for 4.6 years in order to prevent one death from any cause. This statistic, called “the number needed to treat”, gives a clear perspective on the chances of potential benefit. By contrast, stating that the tablets will save hundreds of lives a year does not help, because we need to know the chances that the life saved is ours.
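For readers who like to see the arithmetic: the number needed to treat is simply the reciprocal of the absolute risk reduction between the trial’s control and treatment arms. The event rates below are illustrative, chosen only so that the result lands near the 107 quoted above; they are not the trial’s exact figures.

```python
def number_needed_to_treat(control_rate: float, treated_rate: float) -> float:
    """NNT is the reciprocal of the absolute risk reduction (ARR)."""
    return 1 / (control_rate - treated_rate)

# Illustrative rates (not the trial's exact figures): if 4.1% of
# untreated men and 3.17% of treated men die over 4.6 years, the
# absolute risk reduction is 0.93 percentage points, and the NNT
# comes out close to the 107 quoted in the column.
nnt = number_needed_to_treat(0.041, 0.0317)
print(f"NNT over 4.6 years: about {nnt:.0f}")
```

Note that a small absolute risk reduction, under 1 percentage point here, is what produces a number needed to treat above 100.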

I am not so keen on unhelpful statistics. For example, it could be said that the risk of a blood clot is multiplied by three when a woman is taking one type of combined oral contraceptive pill. This “relative risk” sounds quite alarming. However, the risk of a blood clot when not taking oral contraceptives is 5 in 100,000 women a year. The “absolute risk” therefore is three times this figure, 15 per 100,000 women a year, which is still quite small.
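The conversion from relative to absolute risk is a one-line multiplication; using the figures quoted above:

```python
# Converting a relative risk into an absolute risk, using the
# figures quoted in the column.
baseline_risk = 5 / 100_000   # clots per woman per year, off the pill
relative_risk = 3             # risk multiplier while on the pill

absolute_risk = baseline_risk * relative_risk
extra_cases = (absolute_risk - baseline_risk) * 100_000

print(f"{absolute_risk * 100_000:.0f} per 100,000 women a year")  # 15
print(f"{extra_cases:.0f} extra cases per 100,000 women a year")  # 10
```

The same “trebled risk” headline thus translates into ten extra cases per 100,000 women a year, which is the figure a woman weighing up the pill actually needs.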

In 1995, the publication of a paper on the risk of blood clots when taking a particular type of oral contraceptive caused a “pill scare”. Many women stopped taking their pills, with predictable consequences.

It is clear to me that statistics, when used carefully, can make our certainties crumble and our supposedly fabulous treatments fail. Doctors and patients should not ignore this but use it to their advantage. Yet it appears that statistics are still largely unloved. Presumably this is because they are, in essence, hard sums. A study in the British Medical Journal earlier this year highlighted this. The researchers took a group of obstetricians, midwives and patients and gave them information on a hypothetical but close-to-real-life scenario. It concerned the probability of a positive screening test, carried out to assess the foetal risk of Down’s syndrome, being correct or not.

The groups were asked: “A blood test screens pregnant women for babies with Down’s syndrome. The test is a very good one but not perfect. Roughly 1 per cent of babies have Down’s syndrome. If the baby has Down’s syndrome, there is a 90 per cent chance that the result will be positive. If the baby is unaffected, there is still a 1 per cent chance that the result will be positive. A pregnant woman has been tested and the result is positive. What is the chance that her baby actually has Down’s syndrome?”

The answer is 47.6 per cent. If you got it wrong, you are just like most people involved in this study: only 34 per cent of obstetricians, no midwives, and 9 per cent of patients got it right.
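The calculation behind that 47.6 per cent is an application of Bayes’ theorem, using only the numbers given in the question:

```python
# Working through the screening question with Bayes' theorem.
# All figures come from the question itself.
prevalence = 0.01       # 1 per cent of babies have Down's syndrome
sensitivity = 0.90      # chance of a positive result if affected
false_positive = 0.01   # chance of a positive result if unaffected

true_positives = prevalence * sensitivity             # 0.009
false_positives = (1 - prevalence) * false_positive   # 0.0099

p_affected_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_affected_given_positive:.1%}")  # → 47.6%
```

The intuition: because unaffected babies vastly outnumber affected ones, even a 1 per cent false-positive rate generates slightly more false alarms than true positives, so a positive result is only a little better than a coin toss.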

Given our track record of supposed breakthroughs and the overselling of medical treatments, it would make far more sense to get the interpretation of statistics right from the start. We need the help of statisticians to make things clearer. Those who are involved in medical research should be flying the flag and promoting sensible approaches. How else can we make informed choices about what interventions to accept?

As a minimum, medical journals reporting clinical trials should be obliged to provide a “community” as well as a “scientific” abstract. Instead of complex numbers that are difficult to interpret, we should be able to find the answers to questions such as what this means to me/my mother/my child. These statistics would contain meaningful data for doctors, patients and journalists, and this way, perhaps, it wouldn’t only be me who loves them.

Further reading

Heart Protection Study

Bandolier study on number needed to treat with statins

British Medical Journal on cost-effectiveness of statins on low risk people

BMJ study on interpreting screening results

Margaret McCartney is a GP in Glasgow

margaret.mccartney@ft.com

More columns and further reading at www.ft.com/mccartney

Copyright The Financial Times Limited 2017. All rights reserved. You may share using our article tools. Please don't copy articles from FT.com and redistribute by email or post to the web.