Political data is taking pundits nowhere; they should turn to sociology
How did the media get it so wrong? Britain’s journalists are almost unanimously asking how they misjudged last week’s election result so spectacularly. Just over two months ago, the left-leaning New Statesman mocked Jeremy Corbyn with a front page that read “Wanted: an opposition”. Last week, he increased Labour’s vote share by more than any leader since 1945.
Defending the magazine’s “objective” approach to reporting, the New Statesman’s deputy editor Helen Lewis took to Twitter to justify her position: “we looked at the polls, local election results & historical data”. Chris Deerin, another commentator, wrote: “we look at the polls. That’s the best science available, when the polls are wrong, we’re likely to be wrong”.
Neither of their responses was atypical, nor was their judgement of Corbyn’s fate. The media relied on a hypothesis, largely unchanged from the one Tony Blair had developed ahead of the 1997 election, that Labour could win votes only if it moved to the centre ground. Political pundits repeated it as settled fact, with few exceptions. Just look at the evidence, they insisted.
For the media, politics was a science, and their hypothesis had been proven. The 1983 election, in which Michael Foot’s left-wing manifesto had been branded a “suicide note”, was their evidence. Socialist policies would never receive electoral support in the UK, we were told.
In science, we talk about “n numbers”. An n number is the sample size of your evidence; it might be the number of patients you’ve tested a drug on, or the number of experiments you’ve carried out on a piece of equipment. In politics, the n number would be the number of elections upon which a hypothesis has been tested. The higher the n number, the more reliable the conclusion.
Here was the problem. The mantra that Labour’s electability was tied to political centrism had an n number of just one. Not only that, but the election it was based on had taken place over thirty years ago. This fact, as well as the dissimilarities between 1983 and present-day Britain, was almost universally ignored by journalists.
But the media’s failure wasn’t just about bad science. What last week’s result proved was that commentators would be better advised to consider social narratives and trends than supposedly objective historical or polling data. What if politics isn’t best understood as a science at all, but as sociology?
Sociology offers different forms of evidence – qualitative, rather than quantitative data – but this doesn’t detract from its value. In fact, political scientists admit that analysis of political data “seldom yields immediately conclusive results” in the way that journalists expect it to.
Despite this, media coverage of politics largely consists of two approaches: either debating the validity of evidence underlying policy proposals or speculating about anticipated election results using historical data. Rarely do journalists attempt to examine the social narratives, values and principles at the heart of political debate. As the commentator Paul Mason notes, “broadcasters have no theory of ideology”.
Writing after the EU referendum, I observed two significant turning points which had gone unnoticed by the media. I believed that the referendum had marked a departure from the “market society” – a term coined by the Harvard sociologist Michael Sandel to describe how economic judgements had become a surrogate for all political decisions. As I wrote in April:
“The EU vote came to exist as much a referendum on the market society as a referendum on EU membership. Remain voters still believed economics to be paramount, while Leave voters considered economic opinion to have almost no significance at all. If “it’s the economy, stupid!” was once Bill Clinton’s famous tool in making sense of voter intentions, 2016 had provided the ultimate test of it.”
My observation rested on no objective, scientific evidence, largely because these questions weren’t being asked in the polls reported by the media. In fact, it’s questionable whether such a shift could even have been captured by empirical data collection methods.
The second lesson from the referendum was the electorate’s changing relationship with evidence. Although dismissed by many as “post-truth politics,” this shift saw many voters connecting with values-based arguments rather than evidence-based ones. During the referendum, these values centred around national identity, patriotism and sovereignty, which succeeded against a “Project Fear” campaign laden with economic forecasts.
Last week’s election went some way towards validating this sociological analysis. Labour’s campaign, centred on unequivocal values of social justice, fairness and universalism, won votes in the face of Theresa May’s repeated managerial and technocratic calls for economic “strength and stability”. While pundits and commentators pored over polling data and the voting intentions of different demographics, a sociological narrative was repeating itself. Values-based arguments downplaying the significance of economic growth were once again winning through. Political science had failed to anticipate the result, while sociology had been largely ignored.
If Britain is experiencing a political crisis, then its media is facing an even greater one. Both the 2016 referendum and the 2017 election results showed the nation’s journalists to be wholly out of touch with the general public. Many have attributed this to a “metropolitan elite” dominating Fleet Street. This may be part of it. But there is also a much broader debate to be had about scientific literacy, the role of evidence in political reporting and the need for journalists to treat sociology with the same respect as “scientific” data.
Politics is not a science and neither is journalism. Only when the media understand that might they start getting it right again.