
His being a mathematician may (or may not) mean the math has a better chance of being correct, but it says little to nothing about the statistical validity of the conclusion being offered. A+B may equal C, but if the actual equation should have been D+E=X, then the conclusion offered is not valid. Or, as Skinner put it, if the assumptions aren't valid, then neither are the conclusions.
I'm a little late to this party, but I am quite glad to have found that it took place. I have no personal issue with TIA or those who have followed his work. I will offer, however, that when it was being presented in the months leading up to the 2004 election, I was troubled by the methodology being used. I was further troubled by the fact that the methodology occasionally changed in subtle ways. That did not suggest dishonesty to me, per se, but rather the possibility that TIA had become so captivated by the numbers themselves that he had not thought to ask himself whether the assumptions on which they were based meant anything.
I am not a statistician myself, but I have done many studies using statistical breakdowns of voter behavior for historical research, and something just didn't strike me as quite right about it all. So I forwarded several of the presentations to a friend who has a PhD in mathematics, is an occasional historian, taught for years, has written advanced mathematics textbooks, is very liberal in his politics, and currently works as an editor of a mathematics journal. I thought he might be able to explain what I was seeing in such a way as either to remove my doubts or confirm them, but the answer he gave me was not at all what I expected.
I still have the email. It was somewhat lengthy, but it ended with this conclusion:
"I am not qualified to judge."
"Why?" I asked him. He then explained to me the difference between doing math and doing statistics, in terms that meant more to PhDs than they do to me, but clearly enough that I gathered the gist. The math in statistics is relatively easy to compute; the equations themselves are largely fixed and well vetted. The numbers fed into those equations, however, are the key to the validity of the conclusion, and the job of a statistician lies in finding the right numbers to use. He did opine, tentatively, that a severe problem existed in TIA's method of combining and averaging and then comparing polls, which was the part of the methodology that had initially given me pause as well. In the case under discussion here, the problem is glaring.
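To make the point about averaging concrete, here is a minimal sketch of my own (invented numbers, not TIA's actual figures or method) showing how the averaging step alone embeds assumptions: how you weight the polls changes the answer, and the shrunken margin of error you get from pooling is only legitimate if the polls are truly independent.

```python
# Illustration only: how combining poll numbers embeds assumptions.
# The polls below are invented for the example.
import math

polls = [  # (candidate share, sample size)
    (0.49, 600),
    (0.51, 1200),
    (0.48, 400),
]

# Unweighted average treats a 400-person poll the same as a 1200-person one.
unweighted = sum(p for p, _ in polls) / len(polls)

# A sample-size-weighted average gives larger polls more influence,
# and here it moves the estimate by about half a point.
total_n = sum(n for _, n in polls)
weighted = sum(p * n for p, n in polls) / total_n

# Pooling also shrinks the nominal standard error -- but only under the
# assumption that the polls are independent, which shared "house effects"
# and shared likely-voter models routinely violate.
pooled_se = math.sqrt(weighted * (1 - weighted) / total_n)

print(f"unweighted={unweighted:.4f} weighted={weighted:.4f} se={pooled_se:.4f}")
```

None of the arithmetic above is hard; the conclusions hinge entirely on which weighting you choose and whether the independence assumption holds, which is exactly the distinction my friend was drawing between doing math and doing statistics.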
My friend then forwarded the presentation to a colleague whose life is statistical analysis. His conclusion? I'm paraphrasing here, but it boiled down to something similar to what Skinner said about this presentation: "The underlying assumptions are false. This is a case of a conclusion being sought and the methods and assumptions fixed to surround what is sought. This is the kind of thing that makes people distrust statistical analysis." And all this took place well before the election.
Since then I have paid little attention to any of this, except to offer my own little bit here, which I'm sure will draw the ire of many. The point for me is simply this: the worthy goal of exposing election fraud is not aided by bad statistical analysis; in fact, bad analysis works in exactly the opposite direction.
