Postscript, Sunday 10th May 2015:
The article below was written on Thursday 7th May, before the polls had closed on this extraordinary UK Election. Then the unexpected result came through, met by universal shock. Against all the expert forecasts and final odds of 14/1 on Betfair, the Conservatives won an outright 'overall majority' in the House of Commons, delivering one of the greatest upsets in British electoral history. David Axelrod, President Obama's election strategist, described it as the starkest failure of polling he had ever seen. The British Polling Council (BPC) called an immediate inquiry.
The final tally of 331 Conservative seats was 41 seats higher than the last betting market estimate from Sporting Index of 290 seats. It was 50 seats higher than the daily average of 281 seats from the three main academic forecasters: Electionsetc.com, Electionforecast.co.uk and the Polling Observatory (16 April 2015 - 6 May 2015 inclusive). The final prediction from Electionforecast, released on 7th May, was also 281 Conservative seats. Given that these forecasts and the betting markets had barely moved ten or fifteen seats in the preceding six months, the scale of the sudden movement was stunning. If, that is, the change really was a 'sudden movement' or 'late swing': the Conservatives may have been leading all along, just never measured as doing so. The 'stewards' inquiry' by the BPC should establish which it was, and its causes (Lib Dem / Labour 'switchers', or a Ukip 'penumbra'?).
Either way, the betting markets and the academic forecasts got the result badly wrong, particularly the 'expert' academics. Both groups will struggle to pass blame onto the miserable collective effort of the pollsters, upon which their forecasts were very largely constructed: the emphasis given to polling data in any forecast model, or in any individual voting or betting decision, is intrinsic to the quality of that decision, and each forecaster has the power to reject such information.
(Footnote: The betting market figures are taken solely from the Sporting Index spread market, the most liquid of all political betting markets, where seats are traded like a share price. By contrast, the seat forecasts from fixed-odds bookmakers such as Ladbrokes and Betfair are derived from probabilities in their relatively illiquid constituency betting markets and are therefore a less reliable guide to crowd wisdom. Crucially, these fixed-odds company predictions are not tradable and are produced for public relations purposes only. Unsurprisingly, during the course of the campaign these poorer predictions have regularly been in arbitrage, notionally at least, with the tradable spread betting markets.)
In this regard, this article provides new insights into
how the betting markets and the expert forecasts moved during the campaign, both relative to each other and over time. On Twitter, the more vocal elements on both sides have at times struggled to conceal their antipathy towards each other. Bettors have charged experts with arrogance and unworldliness, whilst experts readily dismiss bettors as simplistic or partisan. No surprise, then, that large differences between the two sets of forecasts have opened up, despite their self-professed mutual reliance on the same (now suspect) polling data.
In the light of these differences, and the systemic issues revealed in the polling, it is worth asking whether the betting markets were, late in the day, becoming increasingly sceptical about the reliability of the polling, and if so, why. In short, the market could have grown wise with age by beginning to discount the polling information, whilst the academic forecasts made no change to the powerful influence of vote-intention polling on their models. One rationale for discounting, given prior to the event, comes from Matt Singh of Number Cruncher Politics, who argued that the relatively low vote-share polling 'snapshots' for the Conservatives were historically incompatible with the party's relatively high polling numbers on 'economic competence' and 'prime ministerial competence' ratings: no previous party had failed to win overall power with such good fundamentals. If the market was making this judgement against the reliability of the polls, it may explain the divergence between the betting-line predictions and the expert forecasts over the final few days (see fig 3 below), although a differential was in fact opening up long before the last few days of the campaign. This would be a rational or economic explanation for the divergence, as opposed to the common charge used against betting market accuracy: right-leaning bias.
Punters tend to be right-leaning, particularly the city-based clientele of Sporting Index, and they like backing their favoured team, the Conservatives.
Original article, Thursday 7th May (describing the performance of experts and betting markets in predicting the election):
Getting political prediction right counts. From influencing capital markets to cutting through the bluster of political bandwagons, decent forecasts are also central to political strategy, such as deciding when to adopt riskier vote-winning plans or shore up core support with basic messages. Perhaps most importantly, voters must also judge correctly how well parties are faring if they are
to vote tactically. As Stephen Fisher
of Oxford University has pointed out, in the ultra-competitive contest of GE2015, new
types of tactical behaviour are emerging called ‘coalition-directed
voting’. An example of this is Labour voters supporting the Conservatives in the short term, against a feared
SNP bid for a coalition with Labour, so long as Labour aren’t doing sufficiently well to get an overall
majority themselves (in which case the SNP would be less of a threat). Where, though, should
these voters turn for a reliable prediction of the result to inform such tactics? Are the betting lines or the expert academic models best? Is there an alternative source of crowd wisdom beyond the potentially biased judgements of academics and betting crowds?
From late 2014, I began noting down the daily forecasts made by three separate groups: first, a collection of academic 'expert' judgements; secondly, the forecasts of groups of ‘punters’ reflected in betting markets; and finally, a more random sample of the mass public predicting their own local constituency results. Analysing this basic data, I ask now which group has been leading or following the
other? The results show that academics
have done well in leading insights on the high likelihood of an SNP landslide
in Scotland and also a hung parliament outcome, should these outcomes materialise.
However the experts and the public still diverge on the relative
performance of Labour versus the Conservatives. Because of this, there are going to be very public winners and losers when the actual results become
known from tomorrow. In summary, if the
Conservatives beat expectations and get in excess of 290 seats (and Ukip get
more than two or three seats), then it will be a vindication for the betting
market and mass-public crowd. If Labour
does well, say more than 275 seats, the academic crowd will be closer to the actual
result. Either way, we must remember this is only one
skirmish between each crowd – just one roll of the dice. One side will have won
a battle and not the war, although it may feel worse for the losers tomorrow
morning. The debate will rage on about
which crowd is the wisest.
Experts v The Public
Before the final definitive verdict tomorrow, your instinctive
preference for either the expert or the mass public view will probably depend on a
hunch rooted in one of two alternative views of human knowledge. For millennia man has debated whether the
best guide to the world around us is the knowledge or reason of the few (a
Platonic tradition), or the practical experiences of many (an Aristotelian
tradition). Suffice to say here, some
forecasters believe future voting behaviour is a highly technical matter, which
is best understood by a few experts, capable of weighing up the validity of key
theories and managing masses of data to refute or corroborate them. The mass public wanders
ignorantly, befuddled, helpless, deaf and dumb, without discernment. If you believe this elitism, then you probably follow
the academic crowd for now, until they are proved right or wrong by the
results. Alternatively, you may feel
sceptical that a collection of academics, or punters, can be sufficiently diverse
and independent in forming their opinions, and are more prone to either behave
in a partisan or ‘herd-like’ manner. In this case you are a pluralist or a 'liberal' in its truest meaning. You
may also feel that voting behaviour is too complicated and delicate a matter,
driven by a mass of local-level (constituency) information that is beyond the
comprehension of a bunch of academics and their whirring models, or even a
crowd of incentivised bettors. If you take the argument this far, you are probably an economic liberal like Margaret Thatcher's favourite economist-philosopher, F.A. Hayek. There is certainly a strong element of Hayekian epistemology (theory of knowledge) in the case for why betting markets are superior to the cleverest individuals at prediction: the environment of the voting decision is like a vast information storage system, which cannot be fathomed by any single individual, just as the mass of supply-and-demand cues in an economic market cannot be understood by one 'planner'.
In this case, somewhat bravely, you could turn to the crowd in its purest form, trusting mass publics across every constituency to tell you, collectively, who they think will be their next MP. This is perhaps the ultimate compilation of localised crowd-sourced intelligence in a British election. For the first time in electoral studies, online polling technology is revealing new insights into these predictive views of the mass public, allowing us to ask more of them, more regularly and more cheaply.
Much has been made of the increased volume of polling on how people intend to vote because of this online polling revolution. In the last six months of 2009 there were just 103 vote intention polls; in the last six months of 2014 there were 283. But the vote intention question is mainly fodder for the academic models and betting calculators, as is the welcome addition of Lord Ashcroft’s constituency polling by telephone, recognising the need to find local patterns in voting swings in a post-UNS (uniform national swing) multi-party election world. By contrast, the true ‘crowd wisdom’ question is not how people will vote, but who they think will win. To ask this on a constituency-by-constituency basis, to mine a new mass of localised knowledge, requires vast national surveys calling on enormous online panels. Even YouGov’s 600,000-strong UK panel is not really big enough for the job, yet. This type of crowd forecasting will improve with time, as we become able to ask sufficient volumes of people in each individual constituency for their predictions.
The key assumption is Condorcet’s Jury Theorem, which states that if group members have a greater than fifty per cent chance of making the correct decision (they have at least some wisdom), then the probability of a correct majority vote increases rapidly towards unity as the group size increases to infinity (Condorcet, 1785; Murr, 2015). So if the crowd are 50.001 per cent right, then that is good enough: the prediction will only get better as more people are asked the question. Unfortunately, the same logic works in reverse. If the crowd are 49.999 per cent right, the crowd will get more reliably wrong as members are added.
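Condorcet's logic can be checked directly with an exact binomial calculation. The sketch below is illustrative only: the competence values p are hypothetical, and real crowd members are neither equally skilled nor fully independent, as discussed later in this article.

```python
# Exact illustration of Condorcet's Jury Theorem: for n independent
# jurors, each correct with probability p, compute the chance that a
# strict majority is correct. With p just above 0.5 this tends towards 1
# as n grows; with p just below 0.5 it tends towards 0.
# The p values below are hypothetical, purely for illustration.
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that more than half of n jurors are right (n odd)."""
    k_needed = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_needed, n + 1))

for n in (25, 101, 1001):
    print(n, round(majority_correct(0.55, n), 3),
             round(majority_correct(0.45, n), 3))
```

With p = 0.55 the majority is already very likely to be right by a thousand members, while with p = 0.45 the same aggregation makes the crowd reliably wrong: exactly the reverse effect described above.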
Despite a rather small sample for the job, this idea got an
early run-out at this election thanks to the fascinating work of rookie political scientist Andreas Murr
of Oxford, who drew on an internet survey of 17,000 voters conducted by YouGov
in February of this year (that is an average of just 25 members of the public
in each constituency). If it does well,
then the future of predicting elections could be about to change. We may become attached to asking vast online
panels what they think will happen, harnessing new seams of local information in vital local areas previously passed over by groups of
experts. Online panel growth is the
disruptive technology that may be changing not only how future political events are forecast, but sporting ones too. Swap 'parliamentary seats' for 'Division 2 season points' and the same benefits from exercising crowd wisdom apply. The wisdom of crowds idea, made applicable because of the internet and growing online panels, dangles valuable new insights for bookmakers and the betting public alike. And the online panel companies understand this. They consider their panels not just a source of predictive information, but also the basis of new social communities of predictive activity, a sort of gambler's Facebook. Rather than asking panellists boring survey questions to obtain obscure football league information and aggregated wisdom, why not offer them the chance to reveal their knowledge by playing games at the same time, such as Fantasy Football Manager? The growth of online poker showed how gaming communities spring up quickly between ego-driven poker players. Predictive sporting tourneys may be next. YouGov are the most advanced British online panel company to attempt the 'gamification' of their panel experience, in a bid to morph from market research firm to social media giant. They face stiff competition from the bigger American firms such as Research Now, who now have over six million global panel members. It remains to be seen whether British YouGov have the capacity to out-innovate their American competition and grab the bigger prize. I shall return to the constituency-level crowd wisdom model at the end of this article.
Expert Academics
Back to March of this year, just over two months before the election poll. The scene is a large purpose-built lecture room at the
London School of Economics. Like a sugar
addict in a sweet-shop, I was enjoying my favourite day in academia since
leaving the bookmaking industry in 2009.
The subject of the scholarly inquiry was prediction, but the personnel
involved were far removed from shifty race-goers or grubby betting-shop
types.
Sixty of the world’s finest political scientists
specialising in voting behaviour and opinion polling were having their
traditional pre-election day get together.
Here was the brightest collection of individuals I’d seen
assembled under one roof. And they were
staking their hard won reputations, not money.
Many had flown in from around the world to review a dozen different seat
forecast models predicting the UK General Election, the blue-ribbon political horse-race
of the global elections calendar. The
winning model would be the one with the least ‘total seat error’: the sum of the differences between the predicted seat totals for each of the parties and the actual result on May 7th.
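The 'total seat error' criterion can be made concrete in a few lines. The forecast figures below are illustrative only (they are not any particular model's submission), though the actual seat totals are the final GE2015 results.

```python
# A sketch of the 'total seat error' scoring rule used to compare the
# models: the sum of absolute differences between predicted and actual
# seat totals across the parties. Forecast numbers are illustrative.
def total_seat_error(predicted: dict, actual: dict) -> int:
    parties = set(predicted) | set(actual)
    return sum(abs(predicted.get(p, 0) - actual.get(p, 0)) for p in parties)

forecast = {"Con": 281, "Lab": 267, "SNP": 53, "LD": 27, "Ukip": 1}  # illustrative
result   = {"Con": 331, "Lab": 232, "SNP": 56, "LD": 8,  "Ukip": 1}  # GE2015 outcome
print(total_seat_error(forecast, result))  # 107: lower is better
```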
This year the contest had the makings of a classic. Heightened interest among the betting public was making it the biggest non-sporting betting event of all time, with turnover up three-fold on 2010, according to Ladbrokes. The British contest, already tight, had become even more of a challenge, spiced by the presence of two live dark horses in the field with little previous form: the SNP and Ukip. The extent to which these unknown variables could weigh on the voting result was causing much debate and uncertainty.
Many academics who had based their careers on the reliability of the grand old model of UNS (Uniform National Swing) were feeling sombre. There was a deep concern that the British first-past-the-post system might not be able to cope with the new multi-party dynamics. These concerns, it was felt, were made more serious by the central implication of the new Fixed-term Parliaments Act, designed to facilitate the last coalition government, which makes an emergency ‘ad hoc’ election more difficult to call. The academics therefore worried that the British polity could be condemned to five years of unstable and ineffective government based around vote-by-vote deals, uneasy coalitions or delicate ‘confidence and supply’ arrangements.
The academics faced a quandary. As top forecasters, they recognised the deep
uncertainty of the General Election result, but also knew that in the current
academic era, there was public demand for a confident forecast. Deep down, they felt there were no simple
answers, no one idea that explained it all.
In the terminology of expert political judgement (Tetlock, 2005), those in
the room were ‘foxes’ not ‘hedgehogs’, scrappy creatures who believe in a
complicated synthesis of many little ideas and multiple approaches towards
solving a problem. The best electoral
forecasters in the world had not gained their reputations through one knock-out
punch, but hard research and gradual learning.
But now journalistic simplicity was required of them, whilst maintaining
their honesty. Their industry was
booming and gold was on offer. The
political science of predicting elections was in rude health, popularised by
Nate Silver’s best-selling book ‘The Signal and the Noise’ (2012). And in an era where academics are required to demonstrate an impact on wider society to secure research funding, a premium is placed on being ‘outward-facing’. Torn, therefore, between intellectual modesty and humility on the one hand and public demand for simple answers on the other, some stars are emerging within British political science who can do both. Among those presenting models was Rob Ford,
well known for his book ‘Revolt on the Right’ on the rise of Ukip, co-authored
with Matthew Goodwin which won the Political Book of the Year in 2015. (Matthew is writing an eagerly awaited update
book on Ukip, for which I am a researcher).
Also demonstrating his model was Chris Hanretty from the UEA and his
team (Benjamin Lauderdale, Nick Vivyan and Jack Blumenau), whose
electionforecast.co.uk is now an integral part of the BBC’s Newsnight coverage,
and who also provides the UK election model for Nate Silver’s booming FiveThirtyEight US media
business. Presiding over the event
were the eminent John Curtice and Simon Hix, with guest of honour, Sir David
Butler, inventor of the Swingometer and a founding father of psephology, now
90. In short, here was the ‘expert
crowd’ of GE2015 in one room. If good
judgement on the General Election is related to high IQ and years of learning,
this was the place to find it.
Fig. 1 ‘The Expert Crowd’. The young rising stars of British political science (background: Chris Hanretty, Rob Ford and Stephen Fisher) are grilled about their General Election predictive models by the older established names John Curtice, Sir David Butler (90), Simon Hix (standing) and Vernon Bogdanor (foreground).
Table 1. ‘Expert Crowd’ predictions 2015 Forecasting Conference, LSE, 27th
March 2015
I think there are two possible doubts about the claim made
by the researchers of the Political Studies Association expert survey (Chris Hanretty and Will
Jennings) that averaging academic forecasts engenders ‘wisdom of crowds’
benefits. The first problem is the
diversity and independence of the group.
Both these factors are fundamental assumptions of James Surowiecki’s theory, discussed in an excellent early podcast (2004) from the man himself, yet the congregation at the LSE in March and the Expert survey of PSA members look like little more than unrepresentative ‘expert panels’. In short, the models and the judgements of both may be drawing on unduly narrow sources of information,
concentrating heavily on vote intention opinion polling, compared to the
information resources utilised by the mass of the general public up and down
every constituency in the land. The fear
exists that the expert models sink or swim with the polling, however well
aggregated and weighted, along with some adjustments for historical behaviour
such as a late Conservative rally or ‘reversion to mean’ (voters tend to fall
back on what they did before) which turns ‘snapshots’ of polling into
predictions of future results. Worse, a certain ‘group-think’ or ‘herd mentality’ may be in play, making the members of the academic crowd insufficiently independent of one another. In
particular, for the members of the Political Studies Association, their high
level of political knowledge may be of each other’s well publicised work,
continually circulated to each other, particularly the models of the Polling
Observatory, Elections Etc, and Electionforecast.co.uk. Yes, they have produced different forecasts,
but to what extent are these differences the result of widely sourced data, or
merely subjective adjustments to their working parts, or their assumptions? Voting behaviour scholars from America,
particularly Michael Lewis-Beck, made this objection to the British models at
the LSE conference. He felt there were
just too many ‘moving parts’, too many formulas and assumptions in the models, for the results to reflect the data rather than, inevitably, the forecasters’ subjective viewpoints. And here we come to the second problem: potential bias inherent within the academic community.
Academics have a habit (often irritating to the public) of
insulating themselves from the charge of bias by claiming superior knowledge,
in particular command of their own data.
More often than not it is justified, but it becomes a problem when this habit turns almost cultural, to the extent that
new information from other sources can be dismissed without proper
consideration. The charge here is of a
certain intellectual arrogance or hauteur, and it's not hard to find among
political scientists, however ‘fox-like’ they are when approaching their own
data. Take for example a letter that was
sent among an elite sub-group of the PSA members that specialise specifically
in voting behaviour and elections (EPOP), about the relative results of their
own grouped opinions in the recent PSA survey.
Somewhat contemptuously, it reminded the rest of the group that:
Colleagues will have noticed the PSA’s survey of election experts last week… (We) have separated out the predictions from those who actually know things about elections (ie, us) from those who don’t (everybody else).
Whilst this may have been ‘tongue-in-cheek’,
there is little hiding the considerable doubt held by academics about the quality of public judgement more generally, a theme running through voting behaviour work dating back to the seminal 'American Voter' studies of the 1960s. These painted a picture of the public as intellectually disorganised, affective in their motivations when not largely uninterested in and uninformed about representative government (Campbell et al, 1960; Converse, 1964). Why should this group be any better at predicting politics than they are at practising it, as measured by the public forecasts implied in betting market prices? This is not the place for a debate about whether
betting prices benefit as ‘opinion backed by money’ or mislead with a
republican / conservative confirmation bias, simply to show for now the
arbitrage or mismatch of estimates between some of the academic models and the
public markets. Most startling was the
initial forecast of the Polling Observatory, which opened on the 16th
March forecasting 265 Tory seats whilst the leading city spread betting firm,
Sporting Index, was forecasting 282. It
is hard to say from this alone whether there is partisan bias in the markets or the academic forecasts, but the arbitrage between the two casts some doubt on both, a dispute which will only be resolved by comparing a long run of forecasts with results. For now, whilst the
academics can charge the markets with a pro-tory bias, the bookmakers can
equally claim that the academics do not represent national political opinion
either, as this self-selecting survey, conducted by the Times Higher Education Supplement this April, showed.
Just four out of 1,019 respondents said
they supported Ukip, eleven per cent Conservative and forty-six per cent
Labour.
Without yet knowing the final result, how can we judge who has performed better over the course of the election campaign, and therefore shed light on who is less prone to bias or wiser to information: the betting market crowd or the academics? One method is to look at whether one side has converged on the other's predictions (indicating that it is following), whether one side has diverged (indicating that it is opposing the other), or whether the two have mutually converged as the campaign has progressed (indicating increasing agreement).
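This convergence test can be sketched in a few lines, with entirely hypothetical daily seat forecasts standing in for the real betting and academic series: if the average gap between the two lines is larger late in the campaign than early on, one side has diverged from the other.

```python
# Compare two forecast series over a campaign: a narrowing gap means
# convergence (one side following the other), a widening gap means
# divergence. All numbers below are hypothetical, for illustration.
def mean_gap(series_a, series_b):
    """Average absolute difference between two equal-length series."""
    return sum(abs(a - b) for a, b in zip(series_a, series_b)) / len(series_a)

betting  = [282, 283, 284, 286, 288, 290]  # e.g. spread-market midpoints
academic = [282, 282, 281, 281, 281, 281]  # e.g. model point forecasts

early, late = slice(0, 3), slice(3, 6)
print(mean_gap(betting[early], academic[early]))  # early-campaign gap
print(mean_gap(betting[late],  academic[late]))   # late-campaign gap
```

In these made-up numbers the late gap exceeds the early gap, the signature of divergence that the article identifies in the final weeks of the real campaign.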
Con v Lab
Firstly, let’s look at the question of bias within betting
markets, favouring the right because punters tend to be conservative leaning,
and also within academic crowds, because public sector university workers tend
to be much more left-wing than average. In
fact, as figure 2 shows, there is actually very little disagreement between the
average of the three academic models predicting Conservative seats, and the
betting market from city spread firm,
Sporting Index.
Figure 2
The betting market average line for the last 50 days of the campaign is just 4.4 seats higher than the average of the academic lines. The model from Stephen Fisher has indeed been consistently more bullish about Conservative chances (averaging 290.8 seats) than the Sporting Index market (averaging 285.5 seats). However, the academic average is lowered by the bearish forecast from the Polling Observatory, which averaged just 269.5 Tory seats over the last part of the campaign. Despite drawing on much the same polling evidence, a range of over twenty seats between the bottom and top academic forecasts of Tory seats must leave the public wondering whom to believe. The charge that a set of subjective assumptions ultimately drives these models, over-laden as they are with ‘moving parts’, gains added weight.
At this stage, I am dropping the top and bottom forecasts to concentrate on the middle forecast of Hanretty, Lauderdale and Vivyan (electionforecast.co.uk) in comparison with the betting lines from Sporting Index, as it is closest to the average of the academic predictions and therefore fairly representative. Figure 3 shows the daily over-time trends in both lines since 1st December 2014. How have the two predictions related to each other over the course of the long campaign? Firstly, both lines clearly track each other closely in forecasting Conservative seats, and there is little noticeable difference between them prior to mid-March: in the first part of the date range, between 1st December and 15th March, the daily averages were 282.4 seats (academic) and 282.2 seats (betting). It is only in the run-up to polling day, from 16th March, that the lines start to diverge, as the added volume of money for the Conservatives pushes the betting line above the academic forecast line, ending nine seats higher on polling day at 290 seats versus 281 seats from electionforecast.co.uk.
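The ‘drop the top and bottom’ step above is simply taking the median of the three academic forecasts. A minimal sketch, using the campaign-average seat figures quoted in the text (with electionforecast’s 281-seat figure taken as its representative level, an assumption for illustration only):

```python
# Minimal sketch of the aggregation used above: with three academic
# forecasts, dropping the highest and lowest leaves the median, which
# is then compared against the betting line. Seat figures are the
# averages quoted in the text; the 281.0 for electionforecast is an
# assumption for illustration.

def middle_forecast(forecasts):
    """Drop the top and bottom forecasts; return the middle (median) one."""
    return sorted(forecasts)[len(forecasts) // 2]

academic = {
    "Fisher / electionsetc": 290.8,
    "electionforecast": 281.0,
    "Polling Observatory": 269.5,
}

name, seats = sorted(academic.items(), key=lambda kv: kv[1])[1]
print(name, seats)  # electionforecast is the representative middle line
print(middle_forecast(list(academic.values())))  # 281.0
```

Using the median rather than the mean makes the representative line robust to a single outlying model, which matters given the twenty-seat spread noted above.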
Here there is genuine disagreement between the betting crowd and the expert crowd. Was the market coming to distrust the static polling lines? Bettors think that the polling, by itself, underestimates the Conservative seat total by around 14 seats (using YouGov’s ‘Nowcast’ for 6th May of 276 Conservative seats, which translates their polling numbers into seats on that day). The academic line, driven largely by polling, also sits above the YouGov ‘Nowcast’ estimate, because it too expects a last-minute rally for the party; but compared to the YouGov number, this amounts to only five more Conservative seats. In a tight contest, who is right in this dispute could be crucial, and we will only know tomorrow morning.
Figure 3
SNP
The battle between the betting crowd and the expert crowd is multi-faceted at this election; it does not just depend on who gets the balance between Conservative and Labour seats right.
One of the major stories of the campaign has been the rise of the
SNP. Here the picture paints expert wisdom in a favourable light relative to betting market wisdom, because electionforecast.co.uk was spectacularly effective in estimating early – way before the betting markets – a deluge of extra seats for the SNP. Figure 4 shows how the Sporting Index betting line for SNP seats took a while to converge on the expert academic line. This moment was one of the major opportunities to make money in this campaign, and I must shamelessly admit to flagging it up here, at the beginning of December, although I didn’t have a bet myself! The betting public have been reluctant to join the SNP bandwagon, and once again this may be because punters (south of the border at least) have been reluctant to back the Scottish Nationalists for partisan reasons. It is noticeable that once again, in the dying days of this campaign, the Sporting Index betting line has dipped below the academic line. This shows there are plenty of SNP sellers still about, and the firm Sporting Index may be in the enviable position of cheering on SNP wins whilst shedding tears at tight Conservative victories tomorrow morning.
Figure 4
Figure 5
Ukip
However, this criticism of the Ukip betting crowd draws heavily on hindsight and ignores how the campaign has played out. Farage has failed to spark a serious media bandwagon, perhaps because the SNP took it (analysis of media citations of party leaders’ names over the course of the campaign suggests this). When Sporting Index opened their market, I discussed the matter with their market-maker (Aidan Nutbrown), and there was considerable uncertainty about what Ukip could achieve, as well as recognition that there would always be buyers of Ukip out there. Ladbrokes even had a market on the party gaining 100 seats or more. The situation was too uncertain for the betting market to apply a strict mathematical formula translating votes into seats, as the academic models do, because expectations existed at the time that Ukip could break through. Further, there are strong reasons to think the electionforecast model was not capturing local campaign effects that could play into the hands of Ukip. Whilst a small party, inexperienced at campaigning, Ukip could at least target ten or more seats relatively hard. Such difficult-to-assess micro-level factors were never really considered by the largely macro models of electionforecast, and arguably still aren’t. It is still quite possible that Ukip could win 3-4 seats, three to four times what electionforecast have consistently predicted. Like the Conservative / Labour seats battle, the Ukip question will only be resolved tomorrow.
Conclusions
The research above has looked at whether the ‘expert crowd’ or the ‘betting crowd’ has performed better as a guide to the likely election outcome. On certain questions requiring a high level of quite technical political knowledge, notably SNP seats and whether there will be a hung parliament, the experts have led the way: the betting public have gradually come into line with what the experts have been saying pretty much all along. However, looking at the Labour-Conservative contest, and whether the betting crowd has displayed a right-wing bias or the experts a left-wing one, it is perhaps still too early to say. We will have to wait for the result to assess the bullish betting market forecast of Conservative seats relative to the expert academic forecasts. One notable feature of the academic models at least, as opposed to the survey of 500 ‘trade association’ members, is the remarkable variance in their forecasts. Taken individually, they leave the public and potential tactical voters none the wiser as to what will happen. Their average, however, is close to the view of the betting markets (just 4.5 seats lower), so the bias theory in either direction may be overstated. Forecasters are as keen as the betting public to get this election right.
Lastly, let’s give final consideration to the mass public as a whole - the voters who will decide this election. Although regularly reviled for their lack of wisdom and herd-like behaviour ever since the damning 19th-century crowd studies of Gustave Le Bon, the mass public are the largest group, and inherently more diverse and independent than academics or bettors. However, their view, with the exception of endless anecdotal TV and radio interviews (particularly by the BBC), has often been drowned out in this election by elite opinion echoing through social media channels. Bookmakers too have been guilty of not giving voice to their punters’ views, instead talking about their own opinions, as if the views of their traders and PR men matter more than those of their customers. Good bookmakers very rarely ignore the betting behaviour of their customers. Ladbrokes in particular, via its otherwise excellent spokesman Matthew Shaddick, has been all too keen to rubbish the crowd wisdom behind political betting - a cardinal sin in bookmaking PR. For example, on 27th March he said that betting markets are ‘not a particularly good predictor of the results’. If a Ladbrokes spokesman is not prepared to stand behind the inherent forecasting value of his own markets, who is?
The addition of a massive free-play public prediction tournament (100,000 players plus), rather like the hugely successful SkyBet Saturday Super Six game on football, conducted online and yielding crowd-sourced seat forecasts, would have added greatly to our understanding of mass public crowd wisdom on the election. Demonstrating a trust in public judgement that has been sadly lacking during the election, it could have been run by the media, bookmakers or both, pitching the wisdom of the widest possible crowd against expert opinion and extending the range of participants beyond the narrower confines of those who place real-money bets on politics, with their associated biases.
The mass public have also been tapped for their insights via focus grouping, as featured in Lord Ashcroft’s regular polling reports. Sometimes, however, it feels this element of the Ashcroft research is included to carry a joke or two about politicians, or to add colour next to the dry survey reports, rather than to uncover serious insight into mass crowd wisdom. There have also been two monthly quantitative surveys by other polling companies which include items on general election public prediction: one by TNS and the other by ICM for the Daily Telegraph / Guardian. The most recent was by ICM for the Guardian yesterday (6 May). Whilst the vote intention question showed support evenly divided between Labour and the Conservatives on 35 per cent each, the prediction question suggested a three-point victory margin for the Conservatives, on 35 per cent versus Labour on 32 per cent.
This wisdom polling asks the respondent to predict the next government or Prime Minister. The problem with national ‘wisdom’ polling, as with the mechanism of Uniform National Swing that translates vote shares into seat shares, is that it may obscure critical patterns in local support at the constituency level that have developed as our two-party system has fragmented. More promising are ‘local MP’ questions, which hope to tap information held by the public about their own constituency campaigns - such as the amount of activity being expended by the candidates, and other rumours and political gossip currently locked away in local networks.
At this level, polling companies are becoming promising conduits for the expression of mass-crowd wisdom, via their burgeoning online panels. These are growing at such a rate that it is not difficult to envisage that in 2020 we will be able to survey sufficient sample sizes of voters in every constituency to measure their understanding of their own local results. This is a hugely exciting development. For now, his presentation hardly noticed beyond the walls of the LSE conference, we have the research of Andreas Murr and his pioneering 2015 constituency crowd wisdom model. Whilst calling on only 25 voters per constituency this time around (in February), his analysis generated an estimate of 292 Conservative seats. This prediction is remarkable for its closeness to the current betting market. What is more, with ever-growing online panels facilitating larger surveys at future elections, the accuracy of this type of crowd-sourcing can only increase. If the total of Tory seats tomorrow morning hits 292 (or more), then the purest and most localised form of crowd wisdom will have scored an early and notable success, not least for being higher on Conservative seats than eleven of the twelve academic models. This may be the start of a revolution in how we forecast not just elections but referendums and non-political events too. In the age of the Internet and massive online opinion poll panels, we can start trusting the public more than the experts.