Political opinion polling has recently come in for heavy criticism in countries around the world. Yet much of that criticism remains one-dimensional, based on inflated expectations, and shows little appreciation of the many factors that shape people's opinions and attitudes in the modern era of communication. Pollsters often get it right, but the UK general election of 2015, and the Brexit referendum and the US presidential election, both in 2016, all produced outcomes different from what the polls predicted, leading to negative headlines for the industry. Jon Puleston, Lightspeed's Vice President of Innovation, argues that, ironically, these widely publicised 'poll disasters' were not actually that inaccurate from a comparative statistical perspective: two of the three were in fact more accurate than normal, but they failed to predict the outcome, which is a slightly different thing. In his quest to find out what went wrong, what became apparent was the lack of any centralized source of international data about polling accuracy that would allow the performance of one election to be compared objectively with another. "Everyone I spoke to had a different way of assessing the accuracy of their polling activity," he said.
Luckily, a huge amount of polling information is freely available on the internet. "The challenge was to compile it all into a consistent, analysable structure," says Puleston.
Puleston not only provided a reality check, he also showed why polls don't always get it right. To understand properly what went wrong with these prominent miscalled elections, he needed to understand how big the polling errors actually were in the context of other elections around the world. "It's not until you start assembling data on a macro scale, as we have done, that you can actually get some generalized learnings." It has become almost a popular sport for pundits to criticise polling, especially after the US election and UK referendum results. Puleston feels that some of this criticism was fair. The UK 2015 election
polling error was bigger than average, and the UK Polling Council review of this result did reveal some shortcomings in methodology and a lack of transparency. "Nearly every UK and many international polling companies have taken notice and responded to this, and I think this has already had a positive impact." There were also some larger-than-average polling errors at state level in the 2016 US election that were misleading.
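The "polling error" used in comparisons like these can be pinned down with a simple metric, such as the mean absolute difference, in percentage points, between final-poll vote shares and the actual result. A minimal sketch of such a metric follows; the party names and figures are purely illustrative, not actual poll data or Puleston's methodology:

```python
def mean_absolute_error(poll: dict, result: dict) -> float:
    """Mean absolute difference (in percentage points) between
    final-poll vote shares and the actual election result."""
    parties = poll.keys() & result.keys()  # parties present in both
    return sum(abs(poll[p] - result[p]) for p in parties) / len(parties)

# Illustrative (made-up) vote shares, in percent.
poll = {"Party A": 34.0, "Party B": 33.0, "Party C": 9.0}
result = {"Party A": 37.0, "Party B": 31.0, "Party C": 8.0}

print(mean_absolute_error(poll, result))  # 2.0
```

A metric like this makes it possible to say that one election's polls were "more accurate than average" even when they called the wrong winner: a 2-point average error can still flip the outcome in a knife-edge race.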
These prompted an investigation from the US polling community that highlighted the difficulty of phone polling at state level. "Because of the rise of mobile phones it is harder to geographically balance sample." Puleston sees some fascinating contradictions between the perception and the actual performance of recent polls. Whilst the Brexit polls did not predict the outcome, the average polling error was actually below average. "In any other election that was not so knife-edge you might have said polling companies did a pretty good job – especially when considering the well-known complexities of predicting referendum results, where there is no historical benchmarking data to assess who will vote." In their contribution, 'Political Opinion Polls – Is Research the Real Culprit?', Finn Raben, David Smith and Ijaz Gilani argued that this debate on opinion polling has been held before, but that the key points which have been – and continue to be – actively debated are often ignored or forgotten as the latest inquisition gathers momentum. Among the complexities they mentioned: "political opinion" – by its very definition – is a changeable thing, dependent on the person expressing it, the reason for expressing it, and of course the potential (political) benefit that will accrue from its expression. Nor, they said, are such expressions of opinion any longer limited to one news broadcast or two print publications per day – quite the contrary. There is now a plethora of communication vehicles, social media and other publications that can carry an opinion – indeed, anyone's opinion – at any time of day.
According to them, this 'opinion culture', coupled with broadcasters' need to constantly report "new news", can often lead to the dissemination of uncorroborated or, in the worst case, misinformed opinion, all of which can affect the public's – and the electorate's – perception of events. A case in point, they said, could be the recent tragic events in Paris: within a couple of hours, most of the 24-hour news channels were reporting that the French borders were closed; in fact, all ports of entry remained open throughout the events of the weekend. Knowing, therefore, that the information environment is not always fully transparent, they argued, let us try to place the demands made of political opinion polling in context: despite the vastly increased "noise" of commentary and the myriad sources of "opinion", often unfiltered (and uncorroborated!), there remains a consistent demand that any one poll, at any one point in time, should be absolutely accurate with regard to the final outcome of the election. Really?
Had we polled the French public on that tragic weekend about their perceptions of the status of their national borders, would the result have been "accurate"? they asked. One of the frustrations for market researchers about the high expectations everyone has of point-in-time polling is that it clouds perceptions of how market researchers work on more straightforward commercial studies. Researchers usually draw on a very wide range of commentaries and data points in order to build up a more 'contextualised' picture of what is happening. Thus, had the recent British election been treated as a brand assessment exercise, researchers would have assembled all the available evidence about the main protagonists and worked out who the likely winners and losers were. Working in this more holistic and eclectic way, there is a strong argument that the overall result of the British election would have been predicted more clearly. In this environment, the gap between expectations and outcomes may generally be traced to one of three main groups: pollsters, media pundits and politicians.
Firstly, pollsters need to deal with:
- The date of polling for each survey conducted, to understand the potential impact of other media "noise" and to place whatever events were topical in context. After all, a pre-election poll is not a "forecast" until after the election!
- The size and structure of the sample: each of these can have a bearing on the results, and in any publication of findings a clear understanding of these nuances is critical to understanding the results.
- The handling of special issues pertinent to pre-election polls, such as: intention to vote (by region, compared with actual turnout); uncertain voters who claimed they had not made up their minds; and voters who had switched voting loyalties.
An additional point – which, although not espoused in developed electoral environments, does still appear to have some traction in less developed systems – is that of voters being constrained by
'social pressure', reluctant to reveal their preference, or even to articulate it fully, until the actual moment of voting. All of these elements can have an impact upon public perception and conscience, thus influencing any one person's response. The British Election Study – conducted in the wake of the criticism surrounding the poll results of the 2015 General
Election – is one of the more definitive reference points on polling performance; it essentially concludes that in-home samples are better than online samples, and that too little money is spent on polling to ensure consistent and accurate results. Media pundits, meanwhile, need to do some deeper soul-searching about developing consistent 'rules' for how to write about measures of the popular mood.
The excessive blending of their personal views on what constitutes good government with their analysis of (inadequately described) electoral behaviour can often lead to misinterpretation and, indeed, propaganda. A further layer of complication is added when we consider the rising social gap between many pundits and a proportion of the electorate, reinforced by the 'ghettoising' effect of the many social media channels now available.
Lastly, the "gap" found in political interpretation is probably one of the most interesting. The complicated relationship between 'votes' and 'seats' has always represented a challenge for predictability. The complication lies primarily in striking a balance between the need to seek consensus and the need to enforce a majoritarian decision; herein lies the fine act of achieving democratic pluralism. Democracy is, on the whole, a beautiful act of achieving balances, and although very hard to predict, the wisdom of the (electorate) crowd would appear, in most cases, to respect that. 2016 will undoubtedly see evidence from the UK that polls conducted with proper samples, and in less of a rush to meet media deadlines, give a more accurate measure of public opinion. We may also need to accept that deploying online samples (for cost reasons) to research topics for which this sampling approach is less suited is not always "fit for purpose". And lastly, when a poll is deemed "inaccurate", let us first examine the demands we made of the poll before simply assuming that it was the poll that was wrong. In Africa, opinion polling is still not well established, although there are pockets of it being conducted here and there.
According to Paul Nnanwobu, MD/CEO of Random Dynamic Resources Ltd., research companies that conduct opinion polls do so for internal consumption, as publishing such results can be a very high-risk venture, especially where the results do not favour the ruling government. This has clearly contributed to Africa accounting for less than 5% of the world's public opinion research studies. We bring you the views of other market research professionals in this report.