Alastair Campbell was right about the polling problem
Despite my professional background, I essentially agreed with Mr. Campbell. The team at FiveThirtyEight, the American poll aggregator and election forecasting portal, has a podcast segment in which they examine a specific example of polls in the media and determine if it’s a “good use of polls or bad use of polls.”
When I look at polls in our own media landscape, I have to conclude that 95% of them are a “bad use of polls”. Too often, polls merely feed coverage of politics as a spectator sport, where what matters is who is ahead, rather than the substantive issues of the day and what the parties intend to do about them.
Take the media coverage of so-called horse-race polls, which report Labour’s lead over the Conservatives, the SNP’s lead over Labour, or the lead of ‘No’ over ‘Yes’.
At the time of writing, 1,042 horse-race polls of Westminster voting intention have been published since the 2019 general election – one poll every 1.28 days.
Historical analysis of poll accuracy by the British Polling Council – a body chaired by Professor Sir John Curtice – suggests a practical margin of error of four percentage points.
Of the more than a thousand horse-race polls conducted since the last election, 93 showed either Labour or the Conservatives moving by more than this margin of error. In other words, fewer than a tenth of these polls found anything that could be taken as indicating a “real” change in voting preferences. And yet we are inundated with coverage of them on a weekly basis.
Worse than reducing politics to a horse race is reducing the discussion of complex policy decisions to the question of whether or not a policy or decision is popular—perhaps the most common form of logical fallacy in political reporting today.
The notion that a policy or decision is right because a majority, or in many cases a mere plurality, of voters agree with it, or wrong because they disagree with it, is a textbook argumentum ad populum: an appeal to popular opinion. Just because most people think something is true or good doesn’t mean it is true or good.
After Starmer recently dropped Labour’s pledge to remove the two-child benefit cap, critics were frequently met with polls showing that keeping the cap was indeed popular. In Scotland, early coverage of the Rutherglen and Hamilton West by-election criticized the SNP’s stance on the cap, citing a poll showing even its own supporters divided on it.
But whether a policy finds support among the population is irrelevant to whether it is right or wrong. Lifting the cap would lift hundreds of thousands of children out of poverty, give them significantly better life chances, and bring long-term economic and health benefits that would almost certainly outweigh the costs.
Another problem with these polls is that they are interpreted as ironclad expressions of the will of the people, as if they were referendums on specific issues. They are not. Aside from the many sources of uncertainty in polling research, even entirely accurate polls would still be an incomplete expression of public opinion.
Individuals’ beliefs, values and preferences are complex. The webs of association in their minds between these and the policies and politicians we ask them about are ever-changing and highly contingent.
When we ask people whether they agree with a policy, like a politician or would vote for a party, we are asking them to condense all that uncertainty and complexity into one quick, simple judgment. How we phrase a question, and the answer options we offer, can have a significant impact on how they respond.
Take net-zero polls, which typically show that voters do not support net-zero policies that require financial sacrifices, but also that they want the government to act on climate change and support the overall goal of achieving net-zero carbon emissions by 2050.
The Conservatives’ response to such polls mirrored Labour’s use of polls on the two-child benefit cap – both let perceived public opinion guide policy in the hope of electoral gain.
Both are incredibly cynical uses of polls, and I firmly believe that such use of polls impoverishes policymaking and poisons political discourse.
However, there are productive uses for polls. Follow the right data journalists, strategists and pollsters on platforms like Twitter and Substack, or browse the blogs of research agencies, and you’ll find thoughtful, in-depth analysis of what opinion polls are telling us and how politicians should respond.
Such analyses often start from the case for a particular policy, assess the state of public opinion, and ask how politicians can persuade and engage voters.
Instead of looking at opinion polls and thinking, “There go the people. I must follow them, for I am their leader,” or arguing in editorials that a policy is wrong because it happens to be unpopular, or that a decision was right because the party’s poll numbers ticked up a point, politicians can use opinion polls and focus groups to understand how to lead effectively.
How can politicians win the net zero argument and gain public support for net zero measures? How can they advocate for action that will end child poverty, save the healthcare system, eliminate educational inequalities, or address a range of chronic challenges facing the country?
Polling can provide answers to these questions. Sure, some uses of polls amount to political junk food, but it doesn’t have to be that way.