2016 Postmortem
Serious Question re: Nate Silver
What exactly does he base his numbers on?? Especially the % chance of a certain candidate winning a particular state?
Thekaspervote
(32,762 posts)
Shivering Jemmy
(900 posts)
All of these feed into a Monte Carlo simulation of the election; they increase (or decrease) the probability of an outcome in the simulation. The Monte Carlo simulation is run hundreds of times, and the probabilities are based on the number of times each outcome happens.
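A minimal sketch of that Monte Carlo idea: flip a weighted coin for each contested state, tally electoral votes, repeat many times, and report the fraction of runs that reach 270. The state probabilities, electoral-vote counts, and the "safe" vote total below are invented for illustration; they are not Silver's actual inputs.

```python
import random

# Hypothetical swing states: state -> (P(candidate wins state), electoral votes).
# These numbers are made up for the sketch.
SWING_STATES = {
    "FL": (0.55, 29),
    "OH": (0.48, 18),
    "PA": (0.62, 20),
    "NC": (0.45, 15),
}
SAFE_EV = 217  # assumed electoral votes from states not in play

def simulate_election(rng):
    """One simulated election: a weighted coin flip per swing state."""
    ev = SAFE_EV
    for p_win, votes in SWING_STATES.values():
        if rng.random() < p_win:
            ev += votes
    return ev

def win_probability(n_sims=10_000, seed=42):
    """Fraction of simulated elections in which the candidate reaches 270."""
    rng = random.Random(seed)
    wins = sum(simulate_election(rng) >= 270 for _ in range(n_sims))
    return wins / n_sims
```

The reported "chance of winning" is just that final fraction; running more simulations tightens it but doesn't change what it means.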
Ztolkins
(429 posts)
DemocratSinceBirth
(99,710 posts)
Ztolkins
(429 posts)
I know because I've seen it all over the MSM lately... Morning Joe is gospel.
abumbyanyothername
(2,711 posts)
likely explains it all, although I haven't read it yet.
RomneyLies
(3,333 posts)
He takes into account all polls. He weights polls based upon bias and past accuracy, and also upon the number of people polled and how each pollster builds its likely-voter model.
To this he adds additional data such as financial reports, GDP, and unemployment.
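To make the weighting idea concrete, here is a toy weighted polling average: each poll's margin is adjusted for the pollster's known house lean, then weighted by sample size. The specific scheme (square root of sample size, a flat bias subtraction) is a guess for illustration, not Silver's actual formula, and the poll numbers are invented.

```python
import math

# Each entry: (reported margin in points, sample size, pollster house bias in points).
# All values are hypothetical.
polls = [
    (+3.0, 1200, +1.5),
    (+1.0,  600, -0.5),
    (-2.0,  900, -2.0),
]

def weighted_polling_average(polls):
    numerator = denominator = 0.0
    for margin, sample_size, house_bias in polls:
        adjusted = margin - house_bias   # remove the pollster's known lean
        weight = math.sqrt(sample_size)  # larger samples count more
        numerator += weight * adjusted
        denominator += weight
    return numerator / denominator
```

The point is only that "weighting" means exactly this: some polls move the average more than others, according to rules fixed in advance.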
The precise mechanisms within his model are, of course, a trade secret and will never be completely revealed, because if he revealed his model, anybody could duplicate it.
november3rd
(1,113 posts)
He has put over thirty-five years of polling and economic data into his database and uses it to model predictions in current races based on current polls and economic data.
Demsrule86
(68,556 posts)
Statistical analysis of polls and other things as well.
PsychProfessor
(204 posts)
This is basically a statistical equation. Imagine something like this:
B1*X + B2*Y + B3*Z = the election outcome
Those B's are weights that determine how much each of the variables X, Y, and Z contributes to the prediction of that outcome. The X's, Y's, and Z's are predictor variables; these might be economic data or they might be different national or state polls. And there are, I am sure, a whole bunch of these. The weights are determined by how each of these predictors has contributed to accurate predictions in the past.
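That weighted sum is literally just a few multiplications and an addition. In the sketch below, the weights (the B's) and the predictor values (the X, Y, Z) are invented numbers for illustration; a real model has many more terms.

```python
# Hypothetical weights B1, B2, B3 and predictor values X, Y, Z.
weights = {"poll_avg": 0.7, "gdp_growth": 0.2, "approval": 0.1}
predictors = {"poll_avg": 51.0, "gdp_growth": 2.1, "approval": 48.0}

# The prediction is the weighted sum: B1*X + B2*Y + B3*Z.
predicted_outcome = sum(weights[k] * predictors[k] for k in weights)
```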
So, Nate devised a complex equation that takes into account as many relevant predictor variables as possible and tested it against past elections to see how well it predicted those outcomes. Doing this, you can then devise a basic equation that takes whatever data you have and spits out a prediction. Nate's regression model is clearly more complex than this, including parameters for things like proximity to election day.
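"Testing against past elections" is how the weights get their values. Here is the simplest possible version: one predictor, a handful of made-up historical races, and ordinary least squares (a line through the origin) to recover the weight. This is only the textbook idea, not Silver's actual procedure.

```python
# Hypothetical past races: (national poll margin, actual vote margin).
past_elections = [
    (2.0, 1.5), (5.0, 4.8), (-1.0, -0.5), (7.0, 6.9), (0.5, 1.0),
]

def fit_weight(data):
    """Least-squares slope through the origin: B = sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / sxx

B = fit_weight(past_elections)  # how much a point of polling lead is "worth"
```

With many predictors this becomes multiple regression, but the principle is the same: the past data choose the weights, and then the weights are frozen.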
Then, (I think) he performs a whole bunch of simulations. This means that the data are run a bajillion times (not to get too technical) and you see how often the circumstances you have produce each of the outcomes you are interested in. So, when Nate says that Obama has, say, a 75% chance of winning, he is saying that in those many, many simulations, an Obama victory came out 75% of the time. What is important to keep in mind is that Nate's model is a forecasting model. It is for making predictions. It is not about fitting data to the past, but about using past data and current data to make a prediction (as compared to that U of Colorado model). Also, the model itself was constructed a priori, that is, BEFORE the polls started coming in. He is stuck with the model he has, and as you can tell when he talks about it, he talks about what it does, not what he is doing to it.
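The "75% of the time" step can be sketched directly: draw many hypothetical election-day margins around the current polling margin and count how often the candidate ends up ahead. The uncertainty formula below (a spread that widens the further out you are) is an assumption made up for the sketch, not the model's real error term.

```python
import random

def forecast_win_probability(poll_margin, days_to_election,
                             n_sims=20_000, seed=1):
    """Share of simulated margins that come out positive for the candidate."""
    rng = random.Random(seed)
    # Assumed uncertainty: a base spread plus a term for time remaining.
    sigma = 2.0 + 0.05 * days_to_election
    wins = sum(rng.gauss(poll_margin, sigma) > 0 for _ in range(n_sims))
    return wins / n_sims
```

Under these made-up numbers, a modest 2-point lead a month out already yields a win probability well above 50% — which is why small polling shifts can move the headline percentage so sharply.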
At the state level, it is the same idea. Given the local circumstances, the polls, the history of that state, etc., a model is built that predicts the state going for either candidate. The simulations tell us, given the data we have, what percentage of the time that state goes in either direction. (I am pretty sure these are also based on simulations...)
I hope this helps!