3 Reasons Why Elections Are Hard to Predict for Humans and Machines

What if a model said no one will win the election? That model would be rejected immediately, but what if it wasn’t wrong? What if the model picked up a signal in the historical data suggesting the election would be contested and no one wins? That’s not an unprecedented outcome when a “hanging chad” is a known feature. Perhaps the model predicts the US courts will decide. Plausible, if unlikely. Someone always wins eventually, so what’s the ground truth: was the model right or wrong?

Perhaps the wrong question was asked: are election outcomes really binomial? With more than two political parties running, the results aren’t inherently binary (win or lose). Elections are truly a multinomial classification; yet we rarely treat them as multinomial, but rather as binomial. Right, Mr. Nader? Right, Ms. Stein? Thousands of Markov chain simulations are cool on a random walk to the polls, but there’s always the drunk leaning on the lamppost.
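To make the multinomial point concrete, here’s a minimal Monte Carlo sketch with hypothetical parties and vote shares (nothing here comes from real polling data): each simulated election perturbs the expected shares with a little noise, draws every voter’s choice from the resulting multinomial distribution, and tallies plurality winners across thousands of runs.

```python
import random
from collections import Counter

def simulate_election(n_voters=1000, shares=(0.48, 0.47, 0.05), noise=0.02):
    """One simulated election: perturb the expected vote shares, then draw
    each voter's choice from the resulting multinomial distribution."""
    parties = ["A", "B", "C"]
    perturbed = [max(0.0, s + random.gauss(0, noise)) for s in shares]
    total = sum(perturbed)
    weights = [p / total for p in perturbed]
    votes = Counter(random.choices(parties, weights=weights, k=n_voters))
    return max(votes, key=votes.get)  # plurality winner, not majority

def run_simulations(n_sims=5000):
    """Tally how often each party wins a plurality across many simulations."""
    return Counter(simulate_election() for _ in range(n_sims))

random.seed(42)
print(run_simulations())  # win counts roughly track the perturbed shares
```

Note that with three parties the winner is a plurality, not a majority; a minor party can swing the A-vs-B outcome without ever winning itself, which is exactly why treating elections as binomial misses something.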

What causes bad predictions, and why is it difficult for not only humans but also machines to predict the outcome? There are many reasons beyond the scope of this blogivation, which we will explore in Squark Symmetry, but here are a few to get started:

  • Bad data.  People lie.  Citizens do it to pollsters, who inform politicians, who lie to citizens about the results; it’s an unvirtuous cycle, for sure.  A telling empirical indicator that the current US President will win again: when you ask Americans who they will vote for, they say Biden, but when you ask the same Americans who their neighbors will vote for, they say Trump.  This of course varies by geography, but it’s the same trend seen in 2016.

    I’m told people are cautious about announcing their support for Trump or Biden, and don’t want to disclose that they align with a particular politician.  The reasons are as myriad as snowflakes.  With models ONLY being as good as the data used, predicting election outcomes in unprecedented times, with viral externalities and the potential for foreign interference, means new features that are harder to engineer.  A yes/no feature named “Pre-Covid” may become the “new normal” column for machine learning.

  • Skewed data.  It is well known that candidates see a jump in positive polling directly after a political convention. Post-convention bounces regress to the mean (or to the moon) but may still hold some predictive power.  Or are these candidate bounces just dead cat bounces? When trying to predict November, are post-convention polling increases earlier in the year mostly meaningless?  It seems that way. Polling data can also be skewed by bias around the convention, depending on who was sampled in the data frame: coverage error, selection bias, and so on.

  • Overfitting.  This means the results of the analysis match the training data too closely, so the model fails to fit new data reliably.  It’s a chronic problem in prediction, and it’s not always easy to work around.  One approach is early stopping: halt training when the model’s performance on held-out data begins to degrade.  Other methods exist too, but they involve complex math that isn’t always obvious, or even known, to data people. It helps when someone has written the code to do this work for you, like Squark.
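Here is a minimal sketch of the early-stopping idea described above (the training step and validation curve below are stand-ins for illustration, not Squark’s actual implementation): keep training while the validation loss improves, and stop once it has failed to improve for a fixed number of epochs.

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Generic early-stopping loop: stop once the validation loss has
    failed to improve for `patience` consecutive epochs."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(epoch)          # one pass over the training data
        loss = validate()          # measure loss on held-out data
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break              # model has started to overfit; stop
    return best_epoch, best_loss

# Toy validation curve: improves until epoch 20, then degrades (overfitting).
losses = [1.0 / (e + 1) if e <= 20 else 1.0 / 21 + 0.01 * (e - 20)
          for e in range(100)]
curve = iter(losses)
best_epoch, best_loss = train_with_early_stopping(lambda e: None,
                                                  lambda: next(curve))
print(best_epoch)  # 20: training halts well before the 100-epoch limit
```

The `patience` parameter matters because validation loss is noisy; stopping at the very first uptick would quit too early on a curve that merely wobbles.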

There are so many other reasons why election probabilities are just that: probabilities that may or may not happen.  People fail to clean data; they overclean data; they don’t use enough observations.  Sometimes analysts, despite best intentions, just don’t know what they don’t know. Recently on LinkedIn I saw some “expert” with an agency publish an inflammatory Covid-related correlation analysis, and the guy didn’t even make the data stationary.  Doh!
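To illustrate why stationarity matters (with toy data, not the LinkedIn analysis in question): two completely unrelated series that both trend upward will show a near-perfect correlation on their raw levels, and that spurious correlation largely disappears once the series are first-differenced.

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def difference(series):
    """First difference: a simple way to remove a linear trend."""
    return [b - a for a, b in zip(series, series[1:])]

random.seed(1)
# Two independent series that both happen to trend upward over time.
a = [0.5 * t + random.gauss(0, 1) for t in range(200)]
b = [0.3 * t + random.gauss(0, 1) for t in range(200)]

print(pearson(a, b))                          # near 1.0: spurious
print(pearson(difference(a), difference(b)))  # near 0.0: no real relationship
```

The shared time trend, not any real relationship, drives the first number; differencing strips the trend and leaves only the (independent) noise.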

The cool thing about Squark is that we know this, and a lot more too.  When we invented no-code predictive analytics, we created an entire powerful automation, with deep-tech sub-automations built in, that helps reduce and work around the programmatically solvable issues above, and many more.

We started with a vision to democratize prediction for business users by putting advanced AI capability into their hands, without coding.  We’ve done it now, and we want it to be affordable and powerful so everyone can use it, even politicians. That is, if we would actually sell it to them.  But as Jack Nicholson said in that Tom Cruise movie, perhaps they “can’t handle the truth!”

Please don’t forget to vote wherever you are, whenever you can!

