Just checked Sam Wang:

http://synapse.princeton.edu/~sam/pollcalc.html

Looks like he did indeed commit himself to a 98% probability of a Kerry EV win, bless him (he doesn't seem to give a probability for his Popular Vote estimate).

And it's still on a wing and a prayer, though. He revised his "gut estimate" down to 6:1 after a Bayesian adjustment for how likely his assumptions were to be correct.
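To see how big that revision is, here is a minimal sketch of the standard odds-to-probability conversion (the function name is mine, not Wang's): 6:1 odds correspond to a probability well below the model's 98%.

```python
def odds_to_probability(a, b=1):
    """Convert odds of a:b in favor of an outcome to a probability, a / (a + b)."""
    return a / (a + b)

# Wang's poll-only model implied a 98% chance of a Kerry EV win,
# but his Bayesian-adjusted "gut estimate" of 6:1 is noticeably weaker:
gut_prob = odds_to_probability(6)  # 6:1 -> 6/7, roughly 0.857
```

So the adjustment took him from 98% down to roughly 86% confidence, which is a substantial hedge.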

Well, there you go. Sam Wang and TIA agreed, at least on the EV win.

I'm afraid it doesn't convince me that either of them was correct.

Actually, Sam Wang doesn't seem that convinced he was correct either:

http://synapse.princeton.edu/~sam/pollcalc_letters_afte...

But he has interesting things to say about why he might have been wrong:

I think the purely statistical aspects of the analysis did extremely well. The electoral outcome looks like it will be close to the decided-voter outcome predicted by the polls. Victory margins are quantitatively close to the pre-election polls: out of 23 battlegrounds, the direction of the outcome was predicted in 22 (the exception was Wisconsin, where the polling margin was 0.4% for Bush and the actual margin was about 0.4% for Kerry). Quantitatively, 12 victory margins were within one standard error and 17 were within the 95% confidence interval. Not perfect, but not bad.

The most significant errors had to do with the net effect of other factors not encompassed by polls. To make a final prediction, I used previous patterns of uncommitted voters breaking for the challenger as a guide, but this break either did not occur or was cancelled by other factors. My assumption of high turnout was flat out wrong! In the end, the likely-voter models of pollsters were not too far off.

There has been talk of other factors, but a parsimonious explanation may be that the net effect of all other factors was zero. This isn't always true - in past years the outcome seems to have not matched final polls. There seems to be some mystery offset that varies a bit. On the other hand, this year we had more data - maybe it's just a question of having enough data and the right answer falls out.

One advantage of rigorous statistical modeling is that you can see **a clear separation between factual information and assumptions of less certainty**. In this case my baseline calculation was quite accurate, but the intangibles were wrong. As I said, in previous years at least one of the assumptions would have worked. What happened this year is a question for the political and policy people - in the end it goes to show that I am at my best with the numbers!

(my bold)
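For context on the quoted hit rates, here is a minimal sketch (my own check, not Wang's calculation) of what well-calibrated, independent Gaussian polling errors would predict for the 23 battlegrounds:

```python
import math

# Under well-calibrated Gaussian errors, a margin should fall within
# one standard error about 68.3% of the time, and within the 95%
# confidence interval 95% of the time.
n = 23
p_one_se = math.erf(1 / math.sqrt(2))  # P(|Z| <= 1) for a standard normal, ~0.683
expected_one_se = n * p_one_se         # ~15.7 states, versus the 12 observed
expected_95_ci = n * 0.95              # ~21.9 states, versus the 17 observed
print(f"expected within 1 SE: {expected_one_se:.1f}, within 95% CI: {expected_95_ci:.1f}")
```

On those assumptions, 12 of 23 within one standard error (vs ~16 expected) and 17 of 23 inside the 95% interval (vs ~22 expected) hint that the polls' stated error bars may have been somewhat too narrow, which fits his own verdict of "Not perfect, but not bad."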

See also:

http://election.princeton.edu/