
Friday, 1 July 2016

Libor Mk2? Were the betting markets on the EU Referendum fixed by The City to manipulate FX markets?

Wild political and economic times breed paranoia and conspiracy theory.  Right now, people are saying the Sunday Times is sniffing around about a big story.  A gigantic story.  Elements in the City attempted to rig the relatively illiquid betting markets on the day of the EU Referendum.  They wanted to send Foreign Exchange markets the wrong way – i.e. strengthen the Pound – so they could sell it at high levels and make a killing when it collapsed with a Brexit vote.  Their losing outlay on Betfair was modest compared to what they could win on the vastly more liquid FX markets.  Is this true? 

Were City traders political villains?
Allegations of market manipulation at Betfair in particular had been circulating on Twitter for weeks prior to the vote.  The arch peddler was a bête noire of this blog, Professor Leighton Vaughan Williams (@LeightonVW), Director of the Betting Research Unit and Political Forecasting Unit at Nottingham Business School – although he suggested manipulation was in the other direction.  He insinuated Leave were being ‘bigged up’ in the markets, and merrily revealed how he was hoovering up value bets on Remain. 
Just hours before the Brexit result became known, ‘Remain’ traded as short as 1/10 – an implied probability of 90.9% that Britain would stay in the EU.  Were the public sucked into believing that Remain were winning hands down?  Was the social media narrative of a failing Leave campaign - on the crucial day - a bogus construction by some pernicious speculators?  Like lemmings, did punters place losing bets on Remain, thinking it a slam-dunk certainty?  One £100,000 bet at 1/10 was reported from a member of the public. 

Far more seriously, did the actions of some corrupt traders influence how people voted?  Was the dark hand of the City attempting a more audacious plot, beyond winning a few quid and queering capital markets, to send voters the wrong way?  News of the odds was public as voters made that thoughtful trudge to the polling booth.  With a Remain victory looking like a near certainty, perhaps voters either wanted to be on the winning side, or felt some new assurance in a Remain choice, given their fellow voters were for staying in, in the very hours of casting their ballot.  If the suggestion of betting market rigging is true, this could be a Libor Mk2 scandal with political knobs on.  The vote for Leave might have been larger.  Our political future could have been deranged.

For all the naysayers, betting markets have credibility in the public mind.  There is good reason for trusting those bookies.  This blog believes that markets should usually be our most reliable guide to future political outcomes – in fact, to any sort of future outcome. 

Unlike answering an opinion poll question, the act of staking personal cash can be a tax on stupidity - if you lose.  Bettors pay more attention.  The betting market is like a vast information storage system.  It draws information from every nook and cranny of the public consciousness.  Every speech, every interview, every slip, every piece of punditry, every economic indicator, every pub conversation, every grating piece of ‘Vox Pop’ from the BBC.  Into the mixer it all goes.  And yes, dear old polling goes in too.  It’s trusted.  But maybe less so now.  The verdict on polling's value gets mashed, whisked, sieved and weighed with all the other indicators.  A mass of punters attempts to sort the wheat from the chaff, motivated by the power of self-interest.  This is the fullest and most reliable aggregation of our collective public wisdom. 

No one television commentator could compete with it.  During the campaign, Laura Kuenssberg et al. should simply have read out the moving odds for Remain and Leave, with their implied probabilities to make it really simple, and let it rest there.  That would have been our best guide to what was going on.  Unfortunately the professional journalists don't talk odds much, particularly the lofty TV ones; it usually gets in the way of their own take.  They also loathe betting as a dirty business, probably thinking they have a responsibility NOT to mention it.

But what if this actually beautiful process of public judgement making, political betting, was being buggered by nefarious capitalists from the City?  Did it happen?

The short answer is that only Betfair and the bookmakers will know who was staking what.  I worked at Betfair and others.  They know.  Just as they have been excellent in addressing match-fixing questions, the onus is on them again to tell the public what was going on.  Prior to the investigation however, I will make two points in defence of betting markets.

Firstly, this is indeed a mad and paranoid story.  The problem with it – as a conspiracy – is why a small group of traders would waste probably hundreds of thousands of pounds, probably millions, to provide an indivisible benefit for the many other City speculators aware of what was going on.  How was it all organised?  And how could they be so sure that Leave would win?  The final polling was dire for Leave.  Word has it the City ran its own exit polling, but decent exit polling on a one-off referendum is beyond the capabilities of even the best British political scientists; not just hard but hideously expensive too.  Any investigation needs to examine whether there really was exit polling conducted privately, and what the results showed.

Secondly, there is a lovely and even more paranoid theory to counter the market-rigging one.  Bookmakers have been getting increasingly peeved about the Murdoch press, particularly the heavyweight Times and Sunday Times, running endless stories about problem gambling: 23-year-old accountants jumping to their deaths after losing money playing online poker, and communities being destroyed by fixed odds betting terminals.  It's bad copy.  I’ve heard rumours – and they are just that – that Rupert Murdoch is trying to weaken the existing industry prior to introducing his very own betting service in association with The Sun – “Sunbets”.  As yet, Betfair has largely escaped the campaign, but this new story isn't great for them.  

Personally, I don’t believe this particular conspiracy for a moment.  If the Sunday Times run the story they will believe it’s true.

Without pre-judging any possible future inquiry – and leaping forward several steps – this is what I think actually was wrong with the betting markets, although I will not know for sure until Betfair and the bookmakers reveal more information.  ‘Wrong’ is also a matter of degree.  Just because Leave drifted to 8/1 on polling day doesn’t mean that the market got it wrong, any more than the hugely liquid Grand National market got it wrong when Rule the World romped home at 33/1.  Markets aren’t wrong just because an outsider wins.  They are only wrong when outsiders win repeatedly and the fair share of favourites fail to win in their turn, at the rate implied by their odds.
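To make that calibration point concrete, here is a minimal sketch in Python: a fractional price implies a probability, and a market is only mis-calibrated if, across many events, outcomes at a given price win noticeably more or less often than that price implies.  The odds and results below are invented for illustration.

```python
from fractions import Fraction

def implied_prob(fractional_odds: str) -> float:
    """Convert fractional odds such as '1/10' or '33/1' into an implied probability."""
    f = float(Fraction(fractional_odds))   # e.g. '33/1' -> 33.0
    return 1 / (1 + f)                     # stake / (stake + potential winnings)

print(implied_prob("1/10"))   # ~0.909, the Remain price on polling day
print(implied_prob("33/1"))   # ~0.029, a Grand National-style outsider

# Calibration check over many (hypothetical) markets: (odds, did it win?).
results = [("1/10", True), ("1/10", True), ("1/10", False), ("33/1", False), ("33/1", True)]
expected_wins = sum(implied_prob(odds) for odds, _ in results)
actual_wins = sum(1 for _, won in results if won)
print(f"expected {expected_wins:.2f} wins, observed {actual_wins}")  # judge only over a long run
```

A single 33/1 winner moves the observed total well away from the expected one; only a long run of results tells you whether the prices themselves were faulty.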

To assess the value of political betting as a predictor of political events (against polling), the researcher must use long-term data, i.e. a large number of occurrences.  There is a large body of global academic research showing that betting wins in the long run, and is indeed devastatingly accurate – material which I will gladly share. 

But a nagging trend of outsiders winning political betting contests has indeed developed of late.  It is interesting.  Think of the 2015 General Election: not just the surprise Tory majority, but the extraordinary performance of the Scottish Nationalists.  Ladbrokes may have lost up to £2m on that election, and William Hill reported a £1m loss in their official results.  Think also of the remarkable victory of Corbyn, once priced at 1000/1 on Betfair.  And Trump.  All losers for the bookies.

Bookmakers seem to be increasingly embarrassed – looking weak.  They have started to produce some proper hogwash to explain these results.  Worse, they have turned on their punters, their very own customers, to excuse their prices.  This is the official Ladbrokes explanation for the failure of their markets on the EU Referendum – and it's sad a once great company has descended so far:

 “there’s a huge amount of wishful thinking going on in people’s brains when they’re trying to assess the probabilities of these results”. 

So yes, their punters are stupid.  Ladbrokes’ founder, Cyril Stein, would be horrified.  Unfortunately, this sad explanation does not help explain why Remain was as short as 1/10 on the day of voting – assuming that their wishful thinkers were the more emotional (less rational) Leave punters.   Another, more complicated explanation then comes into play from Ladbrokes, equally damning of their own markets.  The problem, Ladbrokes says, was that a majority of (“wishful”) punters bet on Leave, but the really big bets – and the majority of the staking – came in for Remain.  Again, Ladbrokes present a picture of their markets working badly and their punters unable to react correctly to prices.  We are also not told the vital information: what was their company position on the main EU Referendum result, each day and overall?  Of course we would expect more money to be staked on an odds-on favourite (Remain) than an outsider.  Despite this, Ladbrokes could still have been taking a position with Leave, offered at longer odds.  

Were they with Leave or Remain?  Can the pricing of the Referendum be better explained by the bookmakers themselves than by the punters, including the alleged nefarious City types? 

One thing is for sure: we know that Ladbrokes fancied Remain, revealing in the Times Red Box expert survey a heady prediction for the ‘inners’ of 57.01% (they got 48.11%).  I am guessing that much of the explanation for why Leave was always too long in the betting – as I repeatedly mentioned during the campaign (@mincer, and see this blog passim) – was that William Hill and Ladbrokes wanted to bet against Leave themselves.  Not content to eke out a nice little earner on a vast turnover by jobbing the market (i.e. balancing the book so they profited much the same whether Leave or Remain won), they wanted a little gamble of their own.  When the money came in from Leave punters, they didn’t move the price commensurately, as a neutral market operator would, but kept Leave longer and more attractive than it should have been.  When some lumpy bets came in for Remain on the day, they collapsed the price to unrealistic levels – really bad jobbing, probably.  

Assuming the role of bad political scientists rather than proper bookmakers, they didn’t want to lay this money for Remain.  Graham Sharpe, head spokesman at Hills, confirmed to me during the campaign that his company was taking a position with "Leave".  It wasn’t the “City Traders” who were taking the position – it was the bookmakers – just as they had, wrongly, against a Tory overall majority in 2015, against Corbyn and against Trump.  If there was “City money” entering the market on the day, the bookmakers accentuated its effect on the market price.  This may have had an effect on the Betfair market too, because of course they are linked.  Any arbitrage between the two will be filled in. 
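To illustrate the difference between jobbing a two-way market and taking a position of the kind described above, here is a minimal sketch in Python.  The stakes and prices are invented for illustration; they are not the bookmakers’ actual figures.

```python
def book_pnl(stakes: dict, fractional_odds: dict) -> dict:
    """Bookmaker profit or loss for each possible winner of a two-outcome market.

    stakes: total staked on each outcome; fractional_odds: (numerator, denominator).
    """
    pnl = {}
    for winner, stake in stakes.items():
        num, den = fractional_odds[winner]
        payout = stake * num / den                                   # winnings owed to backers
        losing_stakes = sum(s for o, s in stakes.items() if o != winner)
        pnl[winner] = losing_stakes - payout
    return pnl

odds = {"Remain": (1, 10), "Leave": (11, 4)}   # hypothetical closing prices

# A 'jobbed' book: stakes kept in balance, so profit is much the same either way.
print(book_pnl({"Remain": 1_000_000, "Leave": 293_000}, odds))

# A book that keeps Leave long and hoovers up Remain money: a healthy profit if
# Remain wins, a loss if Leave wins - the firm is, in effect, betting against Leave.
print(book_pnl({"Remain": 1_500_000, "Leave": 600_000}, odds))
```

The point is only that the shape of a book reveals whether the layer is jobbing or gambling; only the firms themselves know which of these their EU Referendum books resembled.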

Only complete openness about their trading positions and P & L from Ladbrokes, Hills and others, and other trading information from Betfair, will resolve why the EU Referendum betting market on Thursday 23rd June behaved so oddly.  It is extraordinary that in such a tight two-horse race, at one stage, the betting markets suggested Remain had over a 90% chance of winning - as I pointed out on my Twitter account at the time. 

Of course this would have been an important indicator to global financial markets.  The question remains: was the price of 1/10 a reflection of the weight of genuine money from the public for Remain; money from the City for Remain (either legitimate hedging or an attempt to rig the market); or the traditional bookmakers taking fright and protecting their position, which favoured a Remain outcome?  

And was there any secret exit polling conducted by City institutions, as alleged?   The allegations circulating are completely unproven, as are my suggestions that the bookmakers, not the massive crowd of punters, were the real reason why the odds were all wrong.  There may have been some big bets for Remain; from whom, we do not know.  At the moment, there are more questions than answers, and openness from bookmakers is needed.

Wednesday, 18 May 2016

Bet "Leave" at 11/4

In recent days, several people have asked for my view on the EU Referendum result.  I have spent three years on a PhD looking at EU support in Britain and a lifetime in political betting, on and off as a political odds-maker.  I've also taken some serious hits.  The worst was a £16k loss on the 1997 General Election, opposing the Lib Dems.  As a spread bet, for every seat they won over 30, I lost £1,000.  They got 46, including winning Winchester by one vote - the first one-vote winning margin since 1867.  Even today I wince as I drive past that place on the M3.  So here we go again.

This blog believes in 'crowd wisdom' as expressed by betting market indicators.  On a liquid market like the EU Referendum, we should expect the market to be a reliable guide as to what may happen.  Taking it on with a bet, given the bookmaker margin, is inadvisable - unless you have a strong view, that is.  And I think there are two things badly wrong with the market at the moment.

The first is that bookmakers are taking positions with 'Remain'.  It is likely that Ladbrokes and William Hill have £1m+ liabilities on a 'Leave' win.  There are plenty of Leave backers out there, but generally the layers are not prepared to make commensurate changes to their odds when accommodating them.  This means their prices don't actually represent market sentiment, but their own trading floor views.  We've seen this before with the General Election, Corbyn and Trump, and each time they have come unstuck.  They think they know better. 

From talking to the current odds-makers, and seeing them operate, it's clear they are being 'advised' by political experts - academics and the like.  Matthew Shaddick, the political man from Ladbrokes, likes pitching up at academic conferences.  In an unholy alliance, he denounces the value of his customers' opinions in front of an appreciative audience of expert political scientists, who also loathe betting as a predictive indicator.  At one conference before the 2015 GE, he stated that "political markets are not a good guide to what may happen".  So better take the expert view instead: the complicated academic models and frigged polling methods, with thousands of highly breakable whirring parts.  Received academic wisdom tends to think that the 'Government cue' is still strong, as it was in 1975, and that voters are uninformed and uninterested on Europe and will fall into line.  My own research on British Election Study data from 2008-2012 shows the opposite.  Voters are increasingly independent-minded (volatile) and have responded to the crisis in the Eurozone (something Leave should be focusing on more).  Since 2000, EU referendums in Holland, France and Ireland have all gone against the pro-EU Establishment, much to its surprise.  Who is to say it won't happen likewise in more Eurosceptic Britain?

The academic experts, and even pundits like politicalbetting.com, are also poor oddsmen.  Firstly, they are inherent favourite-backers.  That is what betting unsophisticates do.  They think 'what is most likely to happen?' and go for it.  A bird in the hand is worth two in the bush.  But betting is about assessing chances against the odds of reward - in short, value.   
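'Value' here just means a positive expected return once your own probability estimate is set against the payout the odds offer.  A minimal sketch in Python, with purely illustrative probabilities rather than anyone's actual view:

```python
def expected_profit(your_prob: float, fractional_odds: tuple, stake: float = 1.0) -> float:
    """Expected profit of a back bet, given your own probability estimate.

    fractional_odds is (numerator, denominator), e.g. (11, 4) for 11/4.
    """
    num, den = fractional_odds
    win_profit = stake * num / den
    return your_prob * win_profit - (1 - your_prob) * stake

# If you put Leave's chance at 40%, then 11/4 offers value...
print(expected_profit(0.40, (11, 4)))   # +0.50 per unit staked
# ...whereas backing the short favourite at your complementary 60% does not.
print(expected_profit(0.60, (1, 10)))   # -0.34 per unit staked
```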

Secondly, all the noise you hear on Twitter is either from fruit-bat 'Leave' evangelicals or reasoned liberals.  That is the image, anyway.  Academia - showered in EU money - is pro-Remain, and hangers-on to its output, particularly those noisy on Twitter like politicalbetting, are well-known liberals too.  In short, 'informed social media' doesn't want Leave to win, and doesn't want to back something it despises.  The same happened with the Tory victory, Corbyn and Trump, whilst the betting public felt differently - and won.  Ladbrokes lost £2m on GE 2015.  An extraordinary failure when they had so much two-way business from which to eke out a profit.  This was a golden opportunity for bookmaking to prove its worth against polling; instead it just followed it, breaking the golden rule that the market knows better than a handful of traders.     

Lastly, I think a bubble for Remain is developing.  This is most evident in the Times Red Box Survey, where the great and the good of British politics (and the public) are asked to submit their predicted vote percentage for Remain (the average is 54%).  It's revealing.  For example, Matthew Parris thinks 62.5%, which proves he is unhinged.  But surely people are going to be influenced by what others have entered, even Parris and a phalanx of other liberal insiders, kindly marked up with an asterisk on the site so we can pay them special attention.  The exercise lacks one key criterion for a 'Wise Crowd' - independence of judgement.

Then there is Betfair.  People will say that this is a perfect betting market and it doesn't involve bookmakers taking positions, advised by their academic advisors and powered by their own egos.  This is just wrong.  The exchange market reflects the strong traditional fixed odds market.  Arbitrages between the two are soon filled in.  And the old world bookies are far less likely to move their prices.  This is why they build up their liability positions.

Set against the noise of this betting event, and the structural reasons why bookmakers may be underestimating Leave's chances in their prices, there is some solid evidence to examine.  Pollsters are all over the place and have lost credibility.  But one thing about their averaged results can be relied upon - they tend to be wrong in a consistent way over time.  We may not know just how well Leave and Remain are doing at any one moment, including now, but over the days and weeks we can see how things are moving for the two camps (assuming no methodological changes by the pollsters).  Here there is a clear trend.  Leave are winning this campaign.  Look at the remorseless rise of the red Leave line to challenge the blue Remain line.  This trend matters hugely, because unless there is some fundamental change to how the campaign is being conducted, it is likely to continue.  And that means - from the graph below (courtesy of Prof. Harold Clarke) - Leave could soon start to overtake Remain.  (Update 13/06/2016 - And they now have.)

[Graph: polling trend lines for Leave and Remain, courtesy of Prof. Harold Clarke - updated 13/06/2016]



So what is the advice for someone who wants a bet?   Look at Oddschecker.com.  11/4 is generally available about a Leave vote.  Have a go and pick up a free bet at the same time.  These odds imply only a 26.7% chance of Leave winning, and a 73.3% chance of Remain winning.  It is worth checking these percentages against the full gamut of prediction percentages on the excellent https://electionsetc.com/
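For anyone wanting to check those percentages themselves, the conversion from fractional odds is straightforward; note too that a bookmaker's two prices sum to more than 100% (the overround), so the 73.3% for Remain above is simply 100% minus the Leave figure rather than Remain's own price.  A short Python sketch - the 1/4 Remain price is an assumed example, not a quote:

```python
def implied(num: int, den: int) -> float:
    """Implied probability of fractional odds num/den (e.g. 11/4 -> 4/15)."""
    return den / (num + den)

leave = implied(11, 4)            # ~0.267, the 11/4 quoted above
remain = implied(1, 4)            # ~0.800, an assumed Remain price of 1/4
overround = leave + remain        # > 1.0: the bookmaker's margin

print(round(leave, 3), round(remain, 3), round(overround, 3))
print("margin-free Leave chance:", round(leave / overround, 3))   # normalise the book
```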

Thursday, 7 May 2015

Predicting GE 2015: Which group of forecasters should we rely upon: 'the expert crowd', 'the betting crowd' or the 'mass public crowd'?

Author note: Albert Tapper (@mincer) was political market-maker at the Sporting Index spread betting firm for the 1997 General Election and is now researching Ukip and this year's General Election for a forthcoming book by Dr Matthew Goodwin and Caitlin Milazzo.  
 
Postscript, Sunday 10th May 2015: 

The article below was written on Thursday 7th May, before the polls had closed on this extraordinary UK election.  Then the unexpected result came through, met by universal shock.  Against all the expert forecasts and final odds of 14/1 on Betfair, the Conservatives won the 326 seats required for an 'overall majority' in the House of Commons, delivering one of the greatest upsets in British electoral history.  David Axelrod, President Obama's election strategist, described it as the most stark failure of polling that he has ever seen.  The British Polling Council (BPC) called an immediate inquiry.  The final tally of 331 Conservative seats was 41 seats higher than the last betting market estimation from Sporting Index of 290 seats.  It was 50 seats higher than the daily average of 281 seats from the three main academic forecasters: Electionsetc.com, Electionforecast.co.uk and the Polling Observatory (16 April 2015 - 6 May 2015 inclusive).  The final prediction from Electionforecast, released on 7th May, was also for 281 Conservative seats.  Given that these forecasts and the betting markets had barely moved ten or fifteen seats in the preceding six months, the scale of the sudden movement was stunning.  If, that is, the change was 'a sudden movement' or 'late swing'.  The Conservatives may have been leading all along, just never measured as doing so.  The 'stewards' inquiry' by the BPC will discover this and its causes (Lib Dem / Labour 'switchers' or a Ukip 'penumbra'?).  Either way, the betting markets and the academic forecasts got the result badly wrong, particularly the 'expert' academics.  Both groups will struggle to pass blame onto the miserable collective effort of the pollsters, upon which their forecasts were very largely constructed.  The emphasis given to polling data in any forecast model, or in any individual voter or betting decision, is intrinsic to the quality of that decision; each forecaster has the power to reject such information.

(Footnote: The betting market figures are taken solely from the Sporting Index spread market, the most liquid of all political betting markets, where seats are traded like a share price.  By contrast, the seat forecasts from fixed odds bookmakers such as Ladbrokes and Betfair are derived from the probabilities in their relatively illiquid constituency betting markets, and are therefore a less reliable guide to crowd wisdom.  Crucially, these fixed odds company predictions are not tradable and are produced for public relations purposes only.  Unsurprisingly, during the course of the campaign these poorer predictions have been in regular arbitrage, notionally at least, with the tradable spread betting markets.)    

In this regard, this article provides new insights into how the betting markets and the expert forecasts moved during the campaign, both relative to each other and over time.  On Twitter, the more vocal elements on both sides have struggled to conceal their antipathy towards the other at times.  Bettors have charged experts with arrogance and unworldliness, whilst experts readily dismiss bettors as simplistic or partisan.  No surprise, then, that large differences between the two sets of forecasts have opened up, despite their self-professed mutual reliance on the same (now suspect) polling data.  In the light of these differences and the systemic issues revealed in the polling, it is worth asking whether the betting markets were, late in the day, becoming increasingly sceptical about the reliability of the polling, and if so why.  In short, the market could have grown wise with age by beginning to discount the polling information, whilst the academic forecasts made no change to the powerful influence of vote intention polling on their models.  One rationale for discounting, given prior to the event, comes from Matt Singh of Number Cruncher Politics, who argued that the relatively low vote share polling 'snapshots' for the Conservatives were incompatible, historically, with the relatively high polling numbers for Conservative 'economic competence' and 'prime ministerial competence' ratings.  No previous party had failed to win overall power with such good fundamentals.  If the market was making this judgement against the reliability of the polls, it may explain the divergence over the last few days between the betting line predictions and the expert forecasts (see fig 3 below) - although a differential was in fact opening up long before the last few days of the campaign.  This would be a rational or economic explanation for the divergence, as opposed to the common charge used against betting market accuracy of right-leaning bias.  Punters tend to be right-leaning, particularly the City-based clientele of Sporting Index, and they like backing their favoured team, the Conservatives.  

Original article, Thursday 7th May (describing the performance of experts and betting markets in predicting the election):
 
Prediction matters


Getting political prediction right counts.  From influencing capital markets to cutting through the bluster of political bandwagons, decent forecasts are also central to political strategy, such as deciding when to adopt riskier vote-winning plans or shore up core support with basic messages.  Perhaps most importantly, voters must also judge correctly how well parties are faring if they are to vote tactically.  As Stephen Fisher of Oxford University has pointed out, in the ultra-competitive contest of GE2015, new types of tactical behaviour are emerging, called ‘coalition-directed voting’.  An example of this is Labour voters supporting the Conservatives in the short term, against a feared SNP bid for a coalition with Labour, so long as Labour aren’t doing sufficiently well to get an overall majority themselves (in which case the SNP would be less of a threat).  Where, though, should these voters turn for a reliable prediction of the result to inform such tactics?  Are the betting lines or the expert academic models best?  Is there an alternative source of crowd wisdom beyond the potentially biased judgements of academic and betting crowds? 

From late 2014, I began noting down the daily forecasts made by three separate groups: firstly, a collection of academic 'expert' judgements; secondly, the forecasts of groups of ‘punters’ reflected in betting markets; and finally, a more random sample of the mass public predicting their own local constituency results.  Analysing this basic data, I now ask which group has been leading and which following.  The results show that academics have done well in leading insights on the high likelihood of an SNP landslide in Scotland and also of a hung parliament, should these outcomes materialise.  However the experts and the public still diverge on the relative performance of Labour versus the Conservatives.  Because of this, there are going to be very public winners and losers when the actual results become known from tomorrow.  In summary, if the Conservatives beat expectations and get in excess of 290 seats (and Ukip get more than two or three seats), then it will be a vindication for the betting market and mass-public crowd.  If Labour does well, say more than 275 seats, the academic crowd will be closer to the actual result.  Either way, we must remember this is only one skirmish between the crowds – just one roll of the dice.  One side will have won a battle and not the war, although it may feel worse than that for the losers tomorrow morning.  The debate will rage on about which crowd is the wisest.

Experts v The Public

Before the final definitive verdict tomorrow, your instinctive preference for either the expert or the mass public view will probably depend on a hunch rooted in one of two alternative views of human knowledge.  For millennia man has debated whether the best guide to the world around us is the knowledge or reason of the few (a Platonic tradition), or the practical experiences of many (an Aristotelian tradition).  Suffice to say here, some forecasters believe future voting behaviour is a highly technical matter, which is best understood by a few experts, capable of weighing up the validity of key theories and managing masses of data to refute or corroborate them.  The mass public wanders ignorantly, befuddled, helpless, deaf and dumb, without discernment.  If you believe this elitism, then you probably follow the academic crowd for now, until they are proved right or wrong by the results.  Alternatively, you may feel sceptical that a collection of academics, or punters, can be sufficiently diverse and independent in forming their opinions, and are more prone to either behave in a partisan or ‘herd-like’ manner.  In this case you are a pluralist or a 'liberal' in its truest meaning.  You may also feel that voting behaviour is too complicated and delicate a matter, driven by a mass of local-level (constituency) information that is beyond the comprehension of a bunch of academics and their whirring models, or even a crowd of incentivised bettors.  If you take the argument this far, you are probably an economic liberal like Margaret Thatcher's favourite economist / philosopher, F.A. Hayek.  There is certainly a strong element of Hayekian epistemology (theory of knowledge) in why betting markets are superior to the cleverest individuals at prediction: the environment of the voting decision is like a vast information storage system, which can't be fathomed by any single individual, like the mass of supply and demand cues of an economic market cannot be understood by one 'planner'.

In this case, somewhat bravely, you could turn to the crowd in its purest form, trusting mass publics across every constituency to tell you, collectively, who they think will be their next MP.  This is perhaps the ultimate compilation of localised crowd-sourced intelligence in a British election.  For the first time in electoral studies, online polling technology is revealing new insights into these predictive views of the mass public, allowing us to ask more of them, more regularly and more cheaply.  Much has been made of the increased volume of polling on how people intend to vote because of this online polling revolution.  In the last six months of 2009 there were just 103 vote intention polls.  In the last six months of 2014 there were 283.  But the vote intention question is mainly fodder for the academic models and betting calculators, as is the welcome addition of Lord Ashcroft’s constituency polling by telephone, recognising the need to find local patterns in voting swings in a post-UNS (uniform national swing) multi-party election world.  By contrast, the true ‘crowd wisdom’ question is not how people will vote, but who they think will win.  To ask this on a constituency-by-constituency basis, to mine a new mass of localised knowledge, requires vast national surveys calling on enormous online panels.  Even YouGov’s 600,000-strong UK panel is not really big enough for the job, yet.  This type of crowd forecasting will improve with time, as we become able to ask sufficient volumes of people in each individual constituency for their predictions.  The key assumption is Condorcet’s Jury Theorem, which states that if group members have a greater than fifty per cent chance of making the correct decision (they have at least some wisdom), then the probability of a correct majority vote increases rapidly towards unity as the group size increases to infinity (Condorcet, 1785; Murr, 2015).  So if the crowd are 50.001 per cent right, that is good enough.  The prediction will only get better as more people are asked the question.  Unfortunately, the same logic works in reverse.  If the crowd are 49.999 per cent right, the crowd will get more reliably wrong as members are added. 
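A quick simulation shows how knife-edged the theorem is.  This is a minimal sketch in Python with illustrative parameters, not a model of any real electorate:

```python
import random

def majority_correct(p: float, n: int, trials: int = 10_000) -> float:
    """Estimate the probability that a majority of n independent voters,
    each correct with probability p, reaches the right verdict."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n))
        wins += correct_votes > n / 2
    return wins / trials

for n in (25, 251, 2501):                      # odd group sizes avoid ties
    print(n, majority_correct(0.52, n), majority_correct(0.48, n))
# With p just above 0.5 the majority verdict heads towards certainty as n grows;
# with p just below 0.5 it becomes just as reliably wrong.
```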

Despite a rather small sample for the job, this idea got an early run-out at this election thanks to the fascinating work of rookie political scientist Andreas Murr of Oxford, who drew on an internet survey of 17,000 voters conducted by YouGov in February of this year (that is an average of just 25 members of the public in each constituency).  If it does well, then the future of predicting elections could be about to change.  We may become attached to asking vast online panels what they think will happen, harnessing new seams of local information in vital local areas previously passed over by groups of experts.  Online panel growth is the disruptive technology that may be changing not only how future political events are forecast, but sporting ones too.  Swap 'parliamentary seats' for 'Division 2 season points' and the same benefits from exercising crowd wisdom apply.  The wisdom of crowds idea, made applicable because of the internet and growing online panels, dangles valuable new insights for bookmakers and the betting public alike.  And the online panel companies understand this.  They consider their panels not just a source of predictive information, but also the basis of new social communities of predictive activity, a sort of gambler's Facebook.  Rather than asking panellists boring survey questions to obtain obscure football league information and aggregated wisdom, why not offer them the chance to reveal their knowledge by playing games at the same time, such as Fantasy Football Manager?  The growth of online poker showed how gaming communities spring up quickly between ego-driven poker players.  Predictive sporting tourneys may be next.  YouGov are the most advanced British online panel company in attempting the 'gamification' of their panel experience, in a bid to morph from market research firm to social media giant.  They face stiff competition from bigger American firms such as Research Now, who now have over six million global panel members.  It remains to be seen whether British YouGov have the capacity to out-innovate their American competition and grab the bigger prize.  I shall return to the constituency-level crowd wisdom model at the end of this article.

Expert Academics

Back to March of this year, just over two months before the election poll.  The scene is a large purpose-built lecture room at the London School of Economics.  Like a sugar addict in a sweet-shop, I was enjoying my favourite day in academia since leaving the bookmaking industry in 2009.  The subject of the scholarly inquiry was prediction, but the personnel involved were far removed from shifty race-goers or grubby betting shop types. 

Sixty of the world’s finest political scientists specialising in voting behaviour and opinion polling were having their traditional pre-election-day get-together.  Here was the brightest collection of individuals I’d seen assembled under one roof.  And they were staking their hard-won reputations, not money.  Many had flown in from around the world to review a dozen different seat forecast models predicting the UK General Election, the blue-ribbon political horse-race of the global elections calendar.  The winning model would be the one with the least ‘total seat error’ between the predicted seat totals for each of the parties and the actual result on May 7th. 

This year the contest had the makings of a classic.  Heightened interest among the betting public was making it the biggest non-sporting betting event of all time, with turnover up three-fold on 2010, according to Ladbrokes.  The British contest, already tight, had become even more of a challenge, spiced by the presence of two live dark horses in the field with little previous form: the SNP and Ukip.  The extent to which these unknown variables could weigh on the voting result was causing much debate and uncertainty.  Many academics who had based their careers on the reliability of the grand old model of UNS (Uniform National Swing) were feeling sombre.  There was a deep concern that the British first-past-the-post system might not be able to cope with the new multi-party dynamics.  These concerns, it was felt, were made more serious by the central implication of the new Fixed-term Parliaments Act, designed to facilitate the last coalition government, which makes an emergency ‘ad hoc’ election more difficult to call.  Therefore the academics worried.  The British polity could be condemned to five years of unstable and ineffective government based around vote-by-vote deals, uneasy coalitions or delicate ‘confidence and supply’ arrangements.

The academics faced a quandary.  As top forecasters, they recognised the deep uncertainty of the General Election result, but also knew that in the current academic era there was public demand for a confident forecast.  Deep down, they felt there were no simple answers, no one idea that explained it all.  In the terminology of expert political judgement (Tetlock, 2005), those in the room were ‘foxes’ not ‘hedgehogs’: scrappy creatures who believe in a complicated synthesis of many little ideas and multiple approaches to solving a problem.  The best electoral forecasters in the world had not gained their reputations through one knock-out punch, but through hard research and gradual learning.  But now journalistic simplicity was required of them, whilst maintaining their honesty.   Their industry was booming and gold was on offer.   The political science of predicting elections was in rude health, popularised by Nate Silver’s best-selling book ‘The Signal and the Noise’ (2012).  And in an era where academics are expected to make an impact on wider society to secure research funding, a premium is placed on being ‘outward-facing’.  Torn, therefore, between intellectual modesty and humility on the one hand, and public demand for simple answers on the other, some stars are emerging within British political science who can do both.  Among those presenting models was Rob Ford, well known for his book ‘Revolt on the Right’ on the rise of Ukip, co-authored with Matthew Goodwin, which won Political Book of the Year in 2015.  (Matthew is writing an eagerly awaited update on Ukip, for which I am a researcher.)  Also demonstrating his model was Chris Hanretty from the UEA and his team (Benjamin Lauderdale, Nick Vivyan and Jack Blumenau), whose electionforecast.co.uk is now an integral part of the BBC’s Newsnight coverage, and who also provide the UK election model for Nate Silver’s booming FiveThirtyEight US media business.  Presiding over the event were the eminent John Curtice and Simon Hix, with guest of honour Sir David Butler, inventor of the Swingometer and a founding father of psephology, now 90.  In short, here was the ‘expert crowd’ of GE2015 in one room.  If good judgement on the General Election is related to high IQ and years of learning, this was the place to find it. 

Fig. 1 ‘The Expert Crowd’.  The young rising stars of British political science (in the background: Chris Hanretty, Rob Ford and Stephen Fisher) are grilled about their General Election predictive models by older established names: John Curtice, Sir David Butler (90), Simon Hix (standing) and Vernon Bogdanor (foreground)




The twelve predictions are listed in table 1 below, along with the remarkably similar results of an ‘expert survey’ of 465 Political Studies Association academics, 45 journalists and 27 pollsters.  Of the forecasting conference predictions, all predict a ‘hung parliament’, half with the Conservatives and half with Labour holding the most seats.  The average seat prediction shows Labour winning just four seats more than the Conservatives (283 versus 279).  Ukip are predicted to win three seats, the SNP forty-one and the Liberal Democrats twenty-one.  This would be a disappointing result for the Conservatives, with the party falling short of being able to cobble together a minority government.  Assessed against the final polling it looks a fair collective judgement, although it may prove to be slightly short of the SNP and Conservative seats and long of Labour.

Table 1. ‘Expert Crowd’ predictions 2015 Forecasting Conference, LSE, 27th March 2015



 
For all the wisdom of these academics, what possible reasons may exist to doubt their aggregated judgement as a ‘crowd’?

I think there are two possible doubts about the claim made by the researchers of the Political Studies Association expert survey (Chris Hanretty and Will Jennings) that averaging academic forecasts engenders ‘wisdom of crowds’ benefits.  The first problem is the diversity and independence of the group.  Both these factors are fundamental assumptions of James Surowiecki’s theory, discussed in an excellent early podcast (2004) from the man here, yet the congregation at the LSE in March and the expert survey of PSA members look like little more than unrepresentative ‘expert panels’.  In short, the models and the judgements of both may be drawing on unduly narrow sources of information, concentrating heavily on vote intention opinion polling, compared to the information resources utilised by the mass of the general public up and down every constituency in the land.  The fear exists that the expert models sink or swim with the polling, however well aggregated and weighted, along with some adjustments for historical behaviour such as a late Conservative rally or ‘reversion to mean’ (voters tend to fall back on what they did before) which turn ‘snapshots’ of polling into predictions of future results.  Worse, a certain ‘group-think’ or ‘herd mentality’ may be in play, making the academic crowd not independent of each other.  In particular, for members of the Political Studies Association, their high level of political knowledge may be knowledge of each other’s well-publicised work, continually circulated among themselves, particularly the models of the Polling Observatory, Elections Etc, and Electionforecast.co.uk.  Yes, they have produced different forecasts, but to what extent are these differences the result of widely sourced data, rather than merely subjective adjustments to their working parts, or their assumptions?   Voting behaviour scholars from America, particularly Michael Lewis-Beck, made this objection to the British models at the LSE conference.  He felt there were just too many ‘moving parts’, too many formulas and assumptions in the models, for the results to reflect the data rather than, inevitably, the forecasters’ subjective viewpoints.  And here we come to the second problem: potential bias inherent within the academic community.

Academics have a habit (often irritating to the public) of insulating themselves from the charge of bias by claiming superior knowledge, in particular command of their own data.  More often than not it is justified, but it becomes a problem when this turns almost cultural, to the extent that new information from other sources can be dismissed without proper consideration.  The charge here is of a certain intellectual arrogance or hauteur, and it's not hard to find among political scientists, however ‘fox-like’ they are when approaching their own data.  Take for example a letter that was sent among an elite sub-group of PSA members who specialise specifically in voting behaviour and elections (EPOP), about the relative results of their own grouped opinions in the recent PSA survey.  Somewhat contemptuously, it reminded the rest of the group that:

Colleagues will have noticed the PSA’s survey of election experts last week… (We) have separated out the predictions from those who actually know things about elections (ie, us) from those who don’t (everybody else).


Whilst this may have been ‘tongue-in-cheek’, there is little hiding the considerable doubt held by academics about the quality of public judgement more generally, a theme running through voting behaviour work dating back to the seminal 'American Voter' studies of the 1960s.  These painted a picture of the public as intellectually disorganised, affective in their motivations when not largely uninterested in and uninformed about representative government (Campbell et al, 1960; Converse, 1964).  Why should this group be any better at predicting politics than they are at practising it, as measured by the public forecasts implied in betting market prices?  This is not the place for a debate about whether betting prices benefit from being ‘opinion backed by money’ or mislead with a republican / conservative confirmation bias; the aim for now is simply to show the arbitrage, or mismatch of estimates, between some of the academic models and the public markets.  Most startling was the initial forecast of the Polling Observatory, which opened on 16th March forecasting 265 Tory seats whilst the leading City spread betting firm, Sporting Index, was forecasting 282.  It is hard to say from this alone whether there is partisan bias in the markets or in the academic forecasts, but the arbitrage between the two throws some caution on both, a dispute which will only be resolved by comparing a long run of forecasts with results.  For now, whilst the academics can charge the markets with a pro-Tory bias, the bookmakers can equally claim that the academics do not represent national political opinion either, as a self-selecting survey conducted by the Times Higher Education Supplement this April showed: just four out of 1,019 respondents said they supported Ukip, eleven per cent the Conservatives and forty-six per cent Labour.  

Without knowing the final result as yet, how can we resolve who has performed better over the course of the election campaign, and thereby shed light on who is less prone to bias or wiser to information: the betting market crowd or the academics?   One method is to look at which side has converged in its predictions on the other (indicating that it is following), whether one side has diverged in its opinions (indicating it is opposing the other), or whether they have mutually converged on each other in their estimations as the campaign has progressed (indicating increasing agreement). 
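One crude way to operationalise that test is to track the daily gap between the two forecast lines and see whose day-to-day movements open or close it.  A minimal sketch in Python, using invented daily Conservative seat forecasts rather than the real Sporting Index or electionforecast series:

```python
# Invented daily Conservative seat forecasts, for illustration only.
betting = [282, 282, 284, 286, 288, 290]
academic = [282, 281, 282, 281, 281, 281]

gaps = [b - a for b, a in zip(betting, academic)]
betting_moves = [b2 - b1 for b1, b2 in zip(betting, betting[1:])]
academic_moves = [a2 - a1 for a1, a2 in zip(academic, academic[1:])]

print("daily gaps:    ", gaps)            # a widening gap signals divergence
print("betting moves: ", betting_moves)   # whichever side moves while the other stays
print("academic moves:", academic_moves)  # flat is the one doing the diverging (or converging)
```

On these made-up numbers the betting line does all the moving; whether that was the pattern in the real series is what the figures below examine.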

Con v Lab

Firstly, let’s look at the question of bias within betting markets, favouring the right because punters tend to be conservative leaning, and also within academic crowds, because public sector university workers tend to be much more left-wing than average.  In fact, as figure 2 shows, there is actually very little disagreement between the average of the three academic models predicting Conservative seats, and the betting market from city spread firm, Sporting Index.

Figure 2



The betting market average line for the last 50 days of the campaign is just 4.4 seats higher than the average of the academic lines.  The model from Stephen Fisher has indeed been more consistently bullish about Conservative chances (averaging 290.8 seats) than the Sporting Index market, averaging 285.5 seats.  However the academic average is lowered by the bearish forecast from the Polling Observatory, averaging just 269.5 seats for the Tories over the last part of the campaign.  Despite drawing on pretty much the same polling evidence, a range of over twenty seats between the bottom academic forecast of Tory seats and the top must leave the public wondering who to believe.  The charge that a set of subjective assumptions is ultimately driving these models, which are over-laden with ‘moving parts’, gains added weight.

At this stage, I am dropping the top and bottom forecasts to concentrate on the middle forecast of Hanretty, Lauderdale and Vivyan (electionforecast.co.uk) in comparison with the betting lines from Sporting Index, as it seems closest to the average academic prediction, and therefore fairly representative.  Figure 3 shows the daily over-time trends in both lines since 1st December 2014.  How have the two predictions related to each other over the course of the long campaign?  Firstly, both lines are clearly close to each other in forecasting Conservative seats, and there is not much noticeable difference in the predictions prior to mid-March.  In the first part of the date range, between 1st December and 15th March, the daily averages were 282.4 seats (academic) and 282.2 seats (betting).  It is only in the run-up to polling day, from 16th March, that the lines start to diverge, as the added volume of money for the Conservatives pushes the betting line above the academic forecast line, ending nine seats higher on polling day at 290 seats versus 281 seats from electionforecast.co.uk.  Here there is genuine disagreement between the betting crowd and the expert crowd.  Was the market coming to distrust the static polling lines?  Bettors think that the polling, by itself, underestimates the Conservative seat total by around 14 seats (using YouGov’s ‘Nowcast’ for 6th May of 276 Conservative seats, which translates their polling numbers into seats on that day).   The academic line, driven largely by polling, also sits above the YouGov 'Nowcast' estimate, because it too expects a last-minute rally for the party - but compared to the YouGov number, this is only five more Conservative seats.  In a tight contest, who is right in this dispute could be crucial, and we will only know tomorrow morning.

Figure 3



SNP

The battle between the betting crowd and the expert crowd is multi-faceted at this election; it does not just depend on who gets the balance between Conservative and Labour seats right.  One of the major stories of the campaign has been the rise of the SNP.   Here the picture paints expert wisdom in a favourable light relative to betting market wisdom, because the forecast of electionforecast.co.uk was spectacularly efficient in estimating early – way before the betting markets – a deluge of extra seats for the SNP.  Figure 4 shows how the betting market line of Sporting Index SNP seats took a while to converge on the expert academic line.  This moment was one of the major opportunities to make money in this campaign and, shamelessly, I must admit to flagging it up here, at the beginning of December, although I didn’t have a bet myself!  The betting public have been reluctant to join the SNP bandwagon, and once again, this may be because punters (south of the border at least) have been reluctant to support the Scottish Nationalists for partisan reasons.  It is noticeable that once again, in the dying days of this campaign, the Sporting Index betting line has dipped below the academic line.  This shows there are plenty of SNP sellers about still, and the firm Sporting Index may be in the enviable position of cheering on SNP wins whilst shedding tears at tight Conservative victories tomorrow morning.

Figure 4



The question of whether there will be a hung parliament has also been central to this election campaign.  One of the psychological traits of the punter is an aversion to cheering on draws.  Bettors like excitement and dramatic results, sometimes letting their hearts rule their heads.  When I worked at Sporting Index, Ladbrokes and Betfair, a 0-0 score in football was usually the best result for the layers (the bookies).  For this reason, ‘No goal-scorer’ (including own goals) is usually your best bet in any football match, without looking at any form, because it is the most unpopular one.  For the same reason, punters in this election have been keen on betting on an overall majority, in particular a Tory one, on fixed odds lines and spread lines.  This would be a terrible result for the fixed odds firms in particular.  Because of this, the line denoting the betting crowd’s prediction of a hung parliament has been consistently lower than the electionforecast prediction, which now makes it a 100 per cent certainty.  Punters still hold out for a 6 per cent chance of a Tory majority, and it is only in the last few days that it has dropped below the 10 per cent mark, seemingly for good.  Figure 5 shows that betting wisdom, or lack of it, has gradually converged on the consistent view of the experts (the black line) that no party will win the 326 seats required for a majority in the House of Commons.  As with the case of the SNP, following expert wisdom on the question of a hung parliament would have been lucrative, and there is still room to oppose a Tory majority for some cash.


Figure 5



Ukip

Finally we turn to the question of Ukip, which has been the subject of my own research during this election for Drs Matthew Goodwin and Caitlin Milazzo’s forthcoming book.  Has expert wisdom or public wisdom proved a more reliable guide here? (See figure 6.)  This is a more complicated story.  The graph shows that both the expert predictions of Ukip’s vote share (electionforecast in mauve and electionsetc in khaki) and the recently established vote share market from Sporting Index (in red) hold out for an impressive Ukip performance of around 10-14 per cent of the vote, up from 3.1 per cent in 2010.  This prediction may have declined slightly since 1st December 2014 and in the last few days, but the estimated Ukip vote has hardly been 'squeezed' as some commentators have suggested.  The betting market prediction for Ukip seats has, however, collapsed, from over ten at the start of December to less than 3.5 now.  Both the academic forecasts shown in the graph for Ukip seats (in light blue and black) have never been higher than five, and the electionforecast prediction never higher than two.  So we might think that the market has been unwise, and overly exuberant about the ability of Ukip to convert votes into seats, the opposite of its attitude to SNP seats.  One interesting psychological aside here is the predisposition of Ukip voters to take risks.  This relationship is shown clearly in all British Election Study surveys that have asked the question.  In short, Ukip voters, like their leader, are punters through and through, and have enjoyed backing their candidates around the country, seemingly against their realistic chances. 

However, this criticism of the Ukip betting crowd draws heavily on hindsight and ignores how the campaign has played out.  Farage has failed to spark a serious media bandwagon, perhaps because the SNP took it (analysis of media citations of party leader names over the course of the campaign suggests this).  When Sporting Index opened their market, I discussed the matter with their market-maker (Aidan Nutbrown), and there was considerable uncertainty about what Ukip could achieve, as well as recognition that there would always be buyers of Ukip out there.  Ladbrokes even had a market on the party gaining 100 seats or more.  The situation was too uncertain for the betting market to apply a strict mathematical formula of votes to seats, as the academic models do, because expectations existed at the time that Ukip could break through.  Further, there are strong reasons to think the electionforecast model was not recognising local campaign effects which could play into the hands of Ukip.  Whilst a small and inexperienced party at campaigning, Ukip could at least target ten or more seats relatively hard.  Difficult-to-assess micro-level factors were never really considered by the largely macro models of electionforecast, and arguably still aren’t.  It is still quite possible that Ukip could win 3-4 seats, 3-4 times what electionforecast have consistently predicted.  Like the Conservative / Labour seats battle, the Ukip question will only be resolved tomorrow.
 Figure 6.



Conclusions

The research above has looked at whether the ‘expert crowd’ or the ‘betting crowd’ has performed better as a guide to the likely election outcome.  On certain questions requiring a high level of quite technical political knowledge, notably SNP seats and whether there will be a hung parliament, the experts have led the way.  The betting public have gradually come into line with what the experts have been saying pretty much all along.   However, looking at the Labour-Conservative contest, and whether the betting crowd has displayed a right-wing bias or the experts a left-wing one, it is perhaps still too early to say.  We will have to wait for the result to assess the bullish betting market forecast of Conservative seats relative to the expert academic forecast.  One notable feature of the academic models at least, as opposed to the survey of 500 'trade association' members, is the remarkable variance in their forecasts.  Taken individually, they leave the public and potential tactical voters none the wiser as to what will happen.  The average, however, is close to the view of the betting markets (just 4.5 seats lower), so the bias theory in either direction may be overstated.  Forecasters are as keen as the betting public to get this election right.

Lastly, let’s give final consideration to the mass public as a whole - the voters who will decide this election.  Although regularly reviled for their lack of wisdom and herd-like behaviour ever since the damning 19th-century crowd studies of Gustave Le Bon, the mass public are the largest group and inherently more diverse and independent than academics or bettors.  However, their view, with the exception of endless anecdotal TV and radio interviews particularly by the BBC, has often been drowned out in this election by elite opinion echoing through social media channels.  Bookmakers too have been guilty of not giving voice to their punters’ views, instead talking about their own opinions, as if the views of their traders and PR men matter more than those of their customers.  Good bookmakers very rarely ignore the betting behaviour of their customers.  Ladbrokes in particular, via its otherwise excellent spokesman Matthew Shaddick, has been all too keen to rubbish the crowd wisdom behind political betting, also a cardinal sin in bookmaking PR.  For example, on 27th March he said that betting markets are 'not a particularly good predictor of the results'.  If a Ladbrokes spokesman is not capable of standing behind the inherent forecasting value of his own markets, who is?

The addition of a massive free-play public prediction tournament (100,000 players plus), rather like the hugely successful SkyBet Saturday Super Six game on football, conducted online and yielding crowd-sourced seat forecasts, would have added greatly to our understanding of mass public crowd wisdom on the election, demonstrating a trust in public judgement that has been sadly lacking during the campaign.  It could have been run by the media, bookmakers or both, pitching the wisdom of the widest possible crowd against expert opinion, and extending the range of participants beyond the narrower confines of those who place real-money bets on politics, with their associated biases. 

The mass public have also been tapped for their insights via focus grouping, as featured in Lord Ashcroft's regular polling reports.  Sometimes, however, it feels as if this element of Ashcroft research is included to carry a joke or two about politicians, or to add colour next to the dry survey reports, rather than to uncover serious insight on mass crowd wisdom.  There have also been two monthly quantitative surveys by other polling companies which include items on general election public prediction, one by TNS and the other by ICM for the Daily Telegraph / Guardian.  The most recent was by ICM for the Guardian yesterday (6 May).  Whilst the vote intention question showed support evenly divided between Labour and Conservatives on 35 per cent each, the prediction question suggested a three-point victory margin for the Conservatives, on 35 per cent versus Labour on 32 per cent.     

The wisdom polling asks the respondent to predict the next government or Prime Minister.  The problem with national 'wisdom' polling, as with the mechanism of Uniform National Swing that translates vote shares into seat shares, is that it may obscure critical patterns in local support at the constituency level that have developed as our two-party system has fragmented.  More promising for garnering this key local information are 'local MP' questions, which aim to tap information held by the public about their own constituency campaigns, such as the amount of activity being expended by the candidates and other rumours and political gossip currently locked away in local networks.   

At this level, polling companies are becoming promising conduits for the expression of mass-crowd wisdom, via their burgeoning online panels.  These are growing at such a rate that it is not difficult to envisage that in 2020 we will be able to test sufficient sample sizes of voters in every constituency to measure their understanding of their own local results.  This is a hugely exciting development.  For now, hardly noticed beyond the walls of the LSE conference, we have the research of Andreas Murr and his pioneering 2015 constituency crowd wisdom model.  Whilst calling on only 25 voters per constituency this time around (in February), his analysis generated an estimate of 292 Conservative seats.  This prediction is remarkable for its closeness to the current betting market.  What is more, with ever-growing online panels facilitating larger surveys at future elections, the accuracy of this type of crowd-sourcing can only increase.  If the total of Tory seats tomorrow morning hits 292 (or more), then the purest and most localised form of crowd wisdom will have scored an early and notable success, not least for being higher on Conservative seats than eleven of the twelve academic models.  This may be the start of a revolution in how we forecast not just elections but referendums and non-political events too.  In the age of the Internet and massive online opinion poll panels, we can start trusting the public more than the experts.