Election 2016: The (Un)Reality of Predicting Voting Behavior

If you hear the gnashing of teeth and the tearing of hair, it may be a group of disgruntled voters whose candidate lost the election, but you might want to consider that it’s the expression of consternation among those of us in the business of predicting what people are going to do.

Election 2016 is now over, and the questioning has started.

Is data dead? Is polling obsolete? How could Nate Silver be wrong?

Shapiro+Raj’s Election 2016 Rapid Research Project was not built to forecast the election results, but during the last three months, we’ve been using our social media platform to survey likely voters on Twitter who were most active in political conversations. Using our Adaptive Listening™ technology and our Rapid Research Survey tools, we asked likely voters in the Twitterverse some traditional polling questions along with compelling projective questions to uncover the emotional drivers of voter partisanship.

In previous posts, we asked “Will Americans elect a Super Hero or a CEO?” in an effort to understand the archetypes and emotional ties that bind supporters to the brands of Clinton and Trump. In October, “The Fears that Haunt Us and Election 2016” explored fear as the great motivator of voter engagement.

In our last post, “Even if Trump Loses, He May Win,” we pointed out that our Rapid Research survey gave a clear indication that Donald Trump supporters were voting for Trump rather than against Hillary Clinton. The emotional intensity of the responses we received from Trump supporters revealed that the unconventional candidate sold an emotionally resonant story that energized a segment of the electorate left behind by the economic recovery. Yet, we still expected Clinton to win.

Oh, and by the way, the quantitative data we collected in each of the three surveys indicated that Trump would win.

Yep. We didn’t miss it. We just didn’t believe it was real.

Oh, we had all kinds of legitimate reasons for discounting our findings.

Our research wasn’t intended to predict the outcome or to even provide generalizable results. We were more interested in the qualitative insights around candidate narratives and the hopes, fears and expectations of social media influencers and hand raisers.

Yet, when we looked back at the data from all three surveys, there it was. It wasn’t a fluke. The quantitative data in all three surveys leading up to the election showed what most election 2016 polls and models did not: Trump was likely to win.

Why didn’t we take it seriously? It’s the weird reality of data analytics.

Experts in behavioral economics would suggest we experienced a miscalibration of subjective probabilities. We suffered from the same overconfidence bias that many of our friends in the world of data science and market research did. We just knew that Clinton was going to win. At the very least, we exhibited social norm bias in that we assumed, like everyone else, the majority of the models must be right.

We also make no representations that our research was intended or designed to be predictive. Yet, in today’s world, social media discourse plays a critical role in how we influence and are influenced, and in the era of the Internet of Things, the discourse of social media influencers and hand raisers may have more predictive power than we data science geeks would like to believe.

In a guest post I wrote for AllAnalytics.com, I argued that historical, quantitative data isn’t always predictive, as in the case of the models built with polling data. By the same token, qualitative data of the kind we collected in our project isn’t always precise, even when it includes some quantitative elements. But combining hard data with qualitative intelligence, especially intelligence collected through focused engagement in social media discourse, can yield a more precise, more correct answer in an environment of uncertainty.

By focusing on engaged listening to social media, we gathered quantitative and qualitative data offering intelligence from which many of the models forecasting the election could have benefited.

The Election 2016 Rapid Research project was intended to listen to and engage with people in an effort to understand sentiment and provide intelligence that strongly indicates future behavior. Our own incredulity aside, it shows that this type of intelligence can and should be integrated into predictive models to provide better, more correct results.

Well, there’s always 2020. Ready or not, here we come!


By Lauren Tucker, SVP Strategy, Research and Analytics