I started this website last October with a forecast for the 2018 Senate elections, and it turned out to be very accurate. I missed two races: Indiana, which I had as a tossup, and Florida. Overall it had a Brier score of .04843. I then made a few tweaks to the model, keeping the car the same and only adjusting things under the hood, and adapted it into a presidential model for 2020. That model has gotten a bit of traction and a following, but how well does my approach translate from a Senate forecast to a presidential forecast?

Well, the best way to find out is to backtest it. I gathered the data I needed from the 2016 election and fed it into my model, changing nothing but the data. It actually surprised me how well it did. With very little prior backtesting besides the 2018 Senate results, I had created "the most accurate forecast" of 2016. I use those quotation marks for a reason: I am a little hesitant to claim that title, since the model was built this year, not in 2016. What I am getting at is how well-built the model proved to be, holding up against roughly 12,000 polls without my knowing in advance whether it would. Below is a little overview of the model.

[Screenshot: model overview]

(One note on the screenshot: the label should read "win the White House," not "keep the White House.")

[Screenshot: model overview, continued]

Overall, the model was much more bearish on Clinton's win chance across the board, though it still favored her to win the election. One thing I think it did well was capturing the separation between the popular vote and the electoral vote in its simulations: it gave Trump a 2-in-3 chance of losing the popular vote if he won the presidency. It missed the national popular vote by 1.5% and averaged a 5.0% miss on state margins, performing around average in the battleground states. Below is a chart of how it did in states not considered solid for Trump or Clinton.

[Screenshot: chart of battleground-state performance]
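For anyone curious how a number like that 2-in-3 falls out of a simulation, here is a minimal sketch in Python. The one_run() draw is a toy stand-in with made-up parameters, not my actual model; the only point is the counting step at the end, where the conditional probability is simply the share of Electoral College wins that come with a popular-vote loss.

```python
import random

random.seed(0)

def one_run():
    # Toy stand-in, not the real model: draw Trump's national popular-vote
    # margin, then an electoral-vote total loosely correlated with it.
    margin = random.gauss(-2.0, 3.0)                       # points; negative = loses PV
    evs = 268 + 25 * (margin + 2.0) + random.gauss(0, 40)  # rough EV relationship
    return margin, evs

runs = [one_run() for _ in range(100_000)]

# Keep only the simulations where Trump wins the Electoral College...
ec_wins = [(m, e) for m, e in runs if e >= 270]
# ...and count how often he loses the popular vote in those runs.
pv_loss_given_win = sum(m < 0 for m, _ in ec_wins) / len(ec_wins)

print(f"P(loses popular vote | wins presidency) = {pv_loss_given_win:.2f}")
```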

The last part I am going to get into is comparing my model to the other 2016 models, and I am doing that with the Brier score. It treats each event as a 1 or a 0, gives props when confidence is used correctly, and hurts a lot when a confident call is wrong; the lower the score, the better. The formula is (forecast probability - observed outcome)², averaged over all events. I got the other forecasters' numbers from a Huffington Post article. Below is a chart of the forecasters' accuracy.

[Screenshot: chart of 2016 forecasters' Brier scores]
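Since the formula is that simple, here is a minimal sketch of the calculation in Python (the three races in the example are hypothetical, not taken from the chart above):

```python
def brier_score(forecasts, outcomes):
    """Average of (forecast probability - observed outcome)^2.

    forecasts: win probabilities given to one side of each race.
    outcomes:  1 if that side actually won, else 0.
    A perfect forecast scores 0; always saying 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three races called at 90%, 70%, and 55%,
# where the favorite won the first two but lost the third.
print(brier_score([0.90, 0.70, 0.55], [1, 1, 0]))  # ~0.134
```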

My model predicted the most races and had the lowest Brier score for 2016. I am still cautious about calling it the most accurate, but in my opinion it is up there with the best. I hope this convinced you to keep up with my model as 2020 evolves. Roll Tide!