Patrick Sturgis et al., “Report of the Inquiry into the 2015 British general election opinion polls”, London: Market Research Society and British Polling Council.


The opinion polls in the weeks and months leading up to the 2015 General Election substantially underestimated the lead of the Conservatives over Labour in the national vote share. This resulted in a strong belief amongst the public and key stakeholders that the election would be a dead heat and that a hung parliament and coalition government would ensue.

In historical terms, the 2015 polls were among the most inaccurate since election polling began in the UK in 1945. Polls at some earlier elections were nearly as inaccurate, but they attracted less attention because they correctly indicated the winning party.

The Inquiry considered eight different potential causes of the polling miss and assessed the evidence in support of each of them.

Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree. The other putative causes can have made, at most, only a small contribution to the total error.
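The within-cell problem described here can be illustrated with a small simulation (all numbers are invented for illustration): weighting restores the correct demographic mix, but if one party's supporters are systematically easier to reach *within every weighting cell*, the weighted estimate remains biased.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 voters in three age cells, each with an
# invented Conservative support rate. Within every cell, we assume
# (hypothetically) that Labour supporters are twice as likely to end up in
# the poll sample -- the kind of within-cell selection bias at issue.
population = []
for age, n, con_rate in [("18-34", 3000, 0.30), ("35-59", 4000, 0.38), ("60+", 3000, 0.47)]:
    for _ in range(n):
        con = random.random() < con_rate
        p_respond = 0.05 if con else 0.10
        population.append((age, con, p_respond))

sample = [p for p in population if random.random() < p[2]]

# Weight the sample back to the true age distribution (cell weighting).
ages = {"18-34", "35-59", "60+"}
pop_by_age = {a: sum(1 for g, _, _ in population if g == a) for a in ages}
smp_by_age = {a: sum(1 for g, _, _ in sample if g == a) for a in ages}
weights = {a: pop_by_age[a] / smp_by_age[a] for a in ages}

true_con = sum(c for _, c, _ in population) / len(population)
weighted_con = sum(weights[a] for a, c, _ in sample if c) / sum(weights[a] for a, _, _ in sample)
print(f"true Con share:    {true_con:.3f}")
print(f"weighted estimate: {weighted_con:.3f}")  # still well below the true share
```

The age weights are exactly right, yet the weighted Conservative share stays far below the truth, because the weighting variable does not capture the selection mechanism.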

We were able to replicate all published estimates for the final polls using raw micro-data, so we can exclude the possibility that flawed analysis, or use of inaccurate weighting targets on the part of the pollsters, contributed to the polling miss.

The procedures used by the pollsters to handle postal voters, overseas voters, and unregistered voters made no detectable contribution to the polling errors.

There may have been a very modest ‘late swing’ to the Conservatives between the final polls and Election Day, although this can have contributed – at most – around one percentage point to the error on the Conservative lead.

We reject deliberate misreporting as a contributory factor in the polling miss on the grounds that it cannot easily be reconciled with the results of the re-contact surveys carried out by the pollsters and with two random surveys undertaken after the election. Evidence from several different sources likewise suggests that differential turnout misreporting made, at most, a very small contribution to the polling errors.

There was no difference between online and phone modes in the accuracy of the final polls. However, over the 2010-2015 parliament and in much of the election campaign, phone polls produced somewhat higher estimates of the Conservative vote share (1 to 2 percentage points). It is not possible to say what caused this effect, given the many confounded differences between the two modes. Neither is it possible to say which was the more accurate mode on the basis of this evidence.

The decrease in the variance on the estimate of the Conservative lead in the final week of the campaign is consistent with herding - where pollsters make design and reporting decisions that cause published estimates to vary less than expected, given their sample sizes. Our interpretation of the evidence is that this convergence was unlikely to have been the result of deliberate collusion, or other forms of malpractice by the pollsters.

On the basis of these findings and conclusions, we make the following twelve recommendations.
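The herding pattern described above can be checked by comparing the observed spread of final-week lead estimates with the spread that sampling error alone would produce. A minimal sketch, with invented poll leads and sample sizes (not the Inquiry's data), assuming simple random sampling:

```python
import math
import statistics

# Hypothetical final-week leads (Con minus Lab, percentage points) and
# sample sizes -- illustrative numbers only.
leads = [0.0, 1.0, 0.0, -1.0, 1.0, 0.0]
sizes = [1500, 2000, 1000, 1200, 1800, 1100]

# Sampling variance of the lead for a multinomial sample, assuming roughly
# one-third support for each of the two main parties, in points squared.
p_con, p_lab = 0.34, 0.34

def lead_var(n: int) -> float:
    # Var(p_con_hat - p_lab_hat) = (p_c(1-p_c) + p_l(1-p_l) + 2*p_c*p_l) / n
    return (p_con * (1 - p_con) + p_lab * (1 - p_lab) + 2 * p_con * p_lab) / n * 100**2

expected_sd = math.sqrt(statistics.mean(lead_var(n) for n in sizes))
observed_sd = statistics.stdev(leads)
print(f"expected SD of lead across polls: {expected_sd:.2f} pts")
print(f"observed SD of lead:              {observed_sd:.2f} pts")
```

An observed spread well below the expected one is the signature consistent with herding; it does not by itself identify which design decisions produced the convergence.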

BPC members should:

  1. include questions during the short campaign to determine whether respondents have already voted by post. Where respondents have already voted by post, they should not be asked the likelihood-to-vote question.
  2. review existing methods for determining turnout probabilities. Too much reliance is currently placed on self-report questions which require respondents to rate how likely they are to vote, with no strong rationale for allocating a turnout probability to the answer choices.
  3. review current allocation methods for respondents who say they don’t know, or refuse to disclose which party they intend to vote for. Existing procedures are ad hoc and lack a coherent theoretical rationale. Model-based imputation procedures merit consideration as an alternative to current approaches.
  4. take measures to obtain more representative samples within the weighting cells they employ.
  5. investigate new quota and weighting variables which are correlated with propensity to be observed in the poll sample and vote intention.
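The turnout weighting that recommendation 2 asks pollsters to review can be sketched as follows. Respondents rate their likelihood of voting on a 0-10 scale, and each answer is mapped to a turnout probability used as a weight; dividing the answer by 10 is one common convention, but, as the recommendation notes, there is no strong rationale for any particular mapping. All respondents and numbers below are invented.

```python
# Hypothetical respondents: stated vote intention plus a 0-10
# likelihood-to-vote answer.
respondents = [
    {"party": "Con", "likelihood": 10},
    {"party": "Lab", "likelihood": 8},
    {"party": "Con", "likelihood": 9},
    {"party": "Lab", "likelihood": 10},
    {"party": "Lab", "likelihood": 5},
]

def turnout_weight(likelihood: int) -> float:
    # One common (but essentially ad hoc) convention: likelihood / 10.
    return likelihood / 10.0

# Turnout-weighted vote shares.
totals: dict[str, float] = {}
for r in respondents:
    totals[r["party"]] = totals.get(r["party"], 0.0) + turnout_weight(r["likelihood"])

total_weight = sum(totals.values())
shares = {party: w / total_weight for party, w in totals.items()}
print(shares)
```

Because the whole estimate hinges on `turnout_weight`, a different but equally defensible mapping (say, counting only those answering 9 or 10) can shift the published shares by several points.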

The Economic and Social Research Council (ESRC) should:

  1. fund a pre- as well as a post-election random probability survey as part of the British Election Study in the 2020 election campaign.

BPC rules should be changed to require members to:

  1. state explicitly which variables were used to weight the data, including the population totals weighted to and the source of the population totals.
  2. clearly indicate where changes have been made to the statistical adjustment procedures applied to the raw data since the previous published poll. This should include any changes to sample weighting, turnout weighting, and the treatment of Don’t Knows and Refusals.
  3. commit, as a condition of membership, to releasing anonymised poll micro-data at the request of the BPC management committee to the Disclosure Sub-Committee and any external agents that it appoints.
  4. pre-register vote intention polls with the BPC prior to the commencement of fieldwork. This should include basic information about the survey design, such as mode of interview, intended sample size, quota and weighting targets, and intended fieldwork dates.
  5. provide confidence (or credible) intervals for each separately listed party in their headline share of the vote.
  6. provide statistical significance tests for changes in vote shares for all listed parties compared to their last published poll.
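The intervals and tests in the last two recommendations can be sketched with standard formulas, assuming simple random sampling; real polls would also need a design-effect adjustment for quotas and weighting, which is omitted here.

```python
import math

def share_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for one party's vote share."""
    se = math.sqrt(p * (1 - p) / n)
    return (p - z * se, p + z * se)

def change_significant(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> bool:
    """Two-sample z-test: is the change in a party's share between two
    independent polls statistically significant at the 5% level?"""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se

lo, hi = share_ci(0.34, 1500)
print(f"34% on n=1,500: 95% CI ({lo:.3f}, {hi:.3f})")
print("33% -> 35% across two n=1,500 polls significant?",
      change_significant(0.33, 1500, 0.35, 1500))
```

A consequence worth noting: a two-point movement between typical-sized polls is usually within sampling error, which is precisely why routine reporting of intervals and tests would temper over-interpretation of poll-to-poll changes.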


