For Second Consecutive Presidential Election, FiveThirtyEight Lags Simple Average of Polls

Dane Van Domelen
2 min read · Dec 31, 2020

The bull case for Nate Silver is getting harder to make


I guess the main reason Nate Silver is a household name today is that he was thought to have made exceptionally good predictions in the 2008 and 2012 presidential elections. He got 49/50 states right in 2008 and all 50 in 2012.

While many marveled at his predictions, others pointed out that it’s not as hard as you might think to run the table. For example, RealClearPolitics, which uses a simple average of state polls, only missed Florida in 2012.

Regardless, Nate Silver got rich and famous, and he continues to make election forecasts on FiveThirtyEight, which are probably more widely tracked and discussed than any other.

But are his predictions any good? Not recently.

Methods and Data Sources

I think the simplest way to assess FiveThirtyEight’s predictions is to look at predicted vs. actual margins in battleground states. RealClearPolitics is a natural benchmark to compare against. At the very least, Nate Silver’s complicated proprietary algorithm ought to outperform a simple average of state polls.

I take predicted margins from the respective websites (FiveThirtyEight: 2020, 2016; RealClearPolitics: 2020, 2016), actual margins from The New York Times (2020, 2016), and battleground-state lists from Ballotpedia (2020, 2016).
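The comparison described above boils down to two numbers per site: how many battleground winners it called correctly, and its average signed error (bias) relative to the actual margins. A minimal sketch of that computation is below; the state names and margins in the usage example are hypothetical placeholders, not the real 2020 figures.

```python
def evaluate(predicted, actual):
    """Compare predicted vs. actual margins across states.

    predicted, actual: dicts mapping state -> margin in percentage
    points (positive = Democratic lead, negative = Trump lead).
    Returns (number of winners called correctly, average bias).
    """
    # A state is "called correctly" if the predicted and actual
    # margins have the same sign.
    correct = sum(
        1 for s in actual if (predicted[s] > 0) == (actual[s] > 0)
    )
    # Positive average bias means predictions tilted toward the
    # Democrat relative to the result, i.e. "biased against Trump."
    bias = sum(predicted[s] - actual[s] for s in actual) / len(actual)
    return correct, bias

# Hypothetical illustration with three made-up states:
pred = {"A": 5.0, "B": -1.0, "C": 2.0}
act = {"A": 1.0, "B": -3.0, "C": -0.5}
correct, bias = evaluate(pred, act)
# A and B are called correctly, C is not; bias = (4.0 + 2.0 + 2.5)/3
```

Running this for each site over the same battleground list yields the headline numbers in the tables below (states correct and average anti-Trump bias).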

2020

Table 1 summarizes the predictive performance of FiveThirtyEight vs. RealClearPolitics in the 2020 battleground states. Both sites got 11/13 states right, but FiveThirtyEight was more biased against Trump on average (4.3% vs. 2.1%). FiveThirtyEight’s prediction was better than RealClearPolitics’s for only 3/13 states.

Table 1. FiveThirtyEight vs. RealClearPolitics in 2020.

2016

Table 2 shows the same comparison for 2016. FiveThirtyEight got one fewer state correct than RealClearPolitics (7/12 vs. 8/12), was again more biased against Trump (3.0% vs. 1.9%), and again only beat RealClearPolitics in a minority of states (4/12).

Table 2. FiveThirtyEight vs. RealClearPolitics in 2016.

Conclusion

FiveThirtyEight’s black-box algorithm for predicting presidential elections appears to be no better than a simple average of polls. If anything, all the filtering and processing that goes on under the hood serves to inject additional bias.
