The British Polling Council has set up an independent inquiry into why the final election opinion polls were so inaccurate after pollsters and forecasters dramatically understated the Conservative share of the vote.
Patrick Sturgis, professor of research methodology at the National Centre for Research Methods, will chair the inquiry, the council said in a statement on Friday.
“The final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation,” it said.
Of the final 11 polls released, 10 put the gap between Labour and the Tories at one percentage point, pointing to a hung parliament.
In the event, the Conservatives won 36.9 per cent of the total national vote share and Labour 30.5 per cent.
On Friday morning, as the results were still coming in, Stephan Shakespeare, chief executive of YouGov, tweeted: “A terrible night for us pollsters. I apologise for a poor performance. We need to find out why.”
Rival survey firm Populus tweeted: “Election results raise serious issues for all pollsters. We will look at our methods and have urged the British Polling Council to set up a review.”
A BBC exit poll on voting day, however, was more accurate. It predicted that the Tories would get 316 seats, Labour 270 and that the Scottish National party would take all but one seat in Scotland — a forecast that shocked professional pundits and party activists alike.
Paddy Ashdown, the former Liberal Democrat leader, pledged to eat his hat if the exit poll’s forecast of 10 Lib Dem MPs was accurate. In the event, it turned out to be a slight overestimate.
The result flew in the face of the political science forecasting models, which pointed towards a hung parliament. One produced by electionforecast.co.uk put the chance of a Conservative majority so low that it amounted to a rounding error and could be treated as effectively zero.
This was a prediction they shared with the majority of forecasting models, all of which were wrong in the same way. Speaking to the Financial Times in April, Will Jennings, a professor of politics at Southampton university who also works as part of a forecasting team, said: “It’s important to understand all the aspects of the model and that these are assumptions. If there’s something systematically wrong with the opinion polls, or . . . constituency polls, all of the forecasts could be out.”
With all the pollsters underestimating the Conservative party’s vote share, it looks likely that is precisely what happened. Some commentators point to a potential “shy Tory” effect, where voters are unwilling to admit that they support the party. However, telephone pollsters routinely found higher levels of support for the Conservatives than online polls during the campaign.
On the other hand, Ukip supporters suggested that the polls were underestimating the level of support for the anti-EU party because of similar shyness among their voters, yet the pollsters managed to accurately estimate their vote share of about 12 per cent.
Another possibility is that Labour struggled to turn those who claimed that they would support the party into actual votes. Turnout was around 66 per cent, but in the last ICM poll before the election, 74 per cent of respondents said their certainty to vote was 10 out of 10.
Nate Silver, a US journalist who made his name predicting US elections and now runs the FiveThirtyEight website, wrote on Friday morning that betting markets and forecasting models were simply overconfident and needed to build in the fact that errors like this have happened before, most notably in the 1992 election campaign.
“Polls, in the UK and in other places around the world, appear to be getting worse as it becomes more challenging to contact a representative sample of voters,” he added. “That means forecasters need to be accounting for a greater margin of error.”
Dr Matthew Ashton, a politics lecturer at Nottingham Trent university, identified one of the reasons for the discrepancy between poll results and the final outcome as a shift in the last few days of the campaign.
“We know that lots of Ukip voters were former Conservatives,” he says. “In the dying hours they might have looked at the close polls and decided that they preferred the guarantee of a Conservative government and an EU referendum over the risk of Ed Miliband and Labour.”
A Survation phone poll conducted on Wednesday as a check on its online polls had the proportions almost spot on. However, Damian Lyons Lowe, Survation’s chief executive, said that he “chickened out” of publishing it, given that it was such an outlier. It nevertheless lends some support to the idea that the other polls missed a late swing, and that part of their error stemmed from the difficulty of properly sampling with online panels.