The opinion polls all agreed that the election was impossibly close. Most of the final pre-election polls showed the Labour and Conservative parties tied. The extrapolation of the data into projected numbers of MPs suggested, throughout the campaign, that a hung parliament was almost inevitable.
That — to state the obvious — is not what happened and we pollsters should all be straightforward in acknowledging the fact.
The polls didn’t get everything wrong. Those in Scotland, predicting an implausibly sweeping Scottish National party landslide, proved to be very accurate.
But in England they underestimated the Conservative vote share by more than the margin of error; and they overestimated Labour support, albeit by a smaller amount. Instead of a virtual dead heat, the Conservatives finished about 6 per cent ahead.
This is not the first time that election polls have been wrong. In 1992 all the polls put Labour too high and the Conservatives too low. A thorough review was conducted, to understand in detail what had happened, and the methodology of voting polls was fundamentally changed as a result: at the next four general elections, the polls were right.
The polling industry will do the same this time: the British Polling Council has already announced that there will be a full and open review of the 2015 election polls so the lessons can be learnt.
There is some evidence that there was a very late swing to the Conservatives. If voters do not decide who to vote for until they are in the polling station, pre-election polls will obviously fail to capture that. Late swing was one of the causes of the polls being wrong in 1992 and it may turn out to be one of the major factors this time.
But, as in 1992, there are likely to be other questions. There are complex challenges in consistently selecting samples of 2,000 adults who are wholly representative of the political complexion of the entire country, and these challenges can change over time.
Voting polls, unlike any others, have to include questions to gauge whether the sample is politically representative as well as demographically representative of the electorate; weightings are applied to achieve this. These weights have shifted since the post-1992 methodology changes to take account of the advent of internet polling. One factor that the industry inquiry will need to reflect on is that in this election the telephone polls tended to present a somewhat different picture from the online polls.
We will also need to review the way that voting polls try to take account of each respondent’s likelihood of voting at all. Because most people feel a social responsibility to vote, they overstate their own probability of doing so. Polls ask respondents to rate their likelihood of voting on a scale of 0-10 and weight their voting intentions according to the answer.
But, on average, respondents put themselves at about 8.5 on that scale, implying an 85 per cent turnout — much too high. And if only about two-thirds do actually vote, which two-thirds that is makes a vast difference to the poll’s result. In the US, pollsters ask several questions about how likely people really are to vote, and Populus had already started to test similar approaches.
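The arithmetic behind that weighting can be sketched in a few lines. This is a simplified illustration with invented respondents, not Populus’s actual methodology: each person’s stated intention is weighted by their 0-10 likelihood divided by ten, so a respondent at 5 counts for half as much as one at 10.

```python
# Illustrative sketch of likelihood-to-vote weighting.
# The sample data and the weighting rule (likelihood / 10) are
# assumptions for demonstration, not any pollster's real method.
from collections import defaultdict

# (party intention, stated likelihood of voting on a 0-10 scale)
respondents = [
    ("Con", 9), ("Con", 10), ("Lab", 8), ("Lab", 6),
    ("Con", 7), ("Lab", 10), ("Con", 8), ("Lab", 5),
]

def weighted_shares(sample):
    """Party shares after weighting each respondent by likelihood / 10."""
    totals = defaultdict(float)
    for party, likelihood in sample:
        totals[party] += likelihood / 10.0
    total_weight = sum(totals.values())
    return {party: w / total_weight for party, w in totals.items()}

# The average stated likelihood implies the turnout the poll assumes.
implied_turnout = sum(l for _, l in respondents) / (10 * len(respondents))
print(f"implied turnout: {implied_turnout:.2f}")
print(weighted_shares(respondents))
```

In this invented sample the raw intentions split evenly, four apiece, yet the weighted shares favour the party whose supporters report higher likelihoods, which is exactly why the question of which two-thirds actually votes matters so much.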
Britain’s polling industry has shown itself willing and able to learn and to innovate. The BPC, of which Populus is a founder member, was set up more than a decade ago on the principle of complete transparency. Accurately polling how people will vote has become more difficult and complex over time, and for that reason it is essential that pollsters all set out clearly how they conduct their polls and exactly what they do to the raw data to seek to produce accurate results. The review of the 2015 polls, and the lessons we must learn from them, will be conducted in the same spirit of openness, transparency, determination and rigour.
The writer is a co-founder of Populus and a former director of strategy in Downing Street