Reluctant Hamas Responders? (was: Shy Tories?)

foo_bar (Donating Member, 1000+ posts) - Thu Jan-26-06 04:26 AM
Original message
Reluctant Hamas Responders? (was: Shy Tories?)
Edited on Thu Jan-26-06 04:32 AM by foo_bar
Wednesday:

An exit poll broadcast by Israel's Channel 2 TV showed Fatah getting 43 percent to 32 percent for Hamas.

http://www.mantecabulletin.com/articles/2006/01/25/ap/headlines/d8fbtbig6.txt

Islamic militant group Hamas was forecast to have won 42 percent of the vote in Wednesday's Palestinian election, just behind 45 percent for President Mahmoud Abbas's Fatah, a new exit poll showed.

The exit poll by an-Najah University in the occupied West Bank city of Nablus showed Hamas closing the gap with Fatah. Earlier exit polls forecast Hamas winning over 30 percent of the vote, compared to more than 40 percent for Fatah.

http://today.reuters.com/news/NewsArticle.aspx?type=topNews&storyID=2006-01-25T200104Z_01_L20602990_RTRUKOC_0_US-MIDEAST.xml

An exit poll by Bir Zeit University in Ramallah, showed Fatah winning 63 seats in the 132 member parliament with 46.4% of the vote and Hamas taking 58 with 39.5 per cent.

http://news.scotsman.com/latest.cfm?id=127422006

So we've got Fatah +11%, Fatah +6.9%, and Fatah +3% according to three sets of exit polls (edit: the +11% TV one seems to be a telephone poll, not a straight up exit poll). They can't all be right, so we turn to our crystal ball:



Officials in the ruling Fatah Party said Thursday that Hamas captured a majority of seats in Palestinian legislative elections, shortly after the militant group claimed victory.

The Fatah officials, speaking on condition of anonymity, said they expected Hamas to win about 70 seats, which would give the Islamists a majority in the 132-seat parliament. They spoke on condition of anonymity because counting in some districts was continuing.

http://abcnews.go.com/International/wireStory?id=1542931

Exit polls released after voting ended late on Wednesday had forecast that while Hamas would deprive Fatah of the clear majority that it has enjoyed since the parliament was first elected a decade ago, it would still only come second.

By Thursday, however, Fatah was conceding that the result would be even worse.

"Hamas has beaten Fatah in the elections," said one senior official, who stood for election to the Ramallah-based parliament.

"They have won more seats than us in the legislative council," another candidate, who was a senior member of the Fatah campaign, said.

http://sify.com/news/fullstory.php?id=14127906

Could all three exit polls be wrong (outside the MoE)? Flame on. :nuke:
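
For a sense of scale, here is a minimal back-of-the-envelope sketch of the nominal 95% MoE on a Fatah-minus-Hamas lead. It assumes simple random sampling and purely hypothetical sample sizes; real exit polls use clustered samples, which widens the interval, so treat it as a lower bound only.

# Nominal 95% margin of error for the lead (p1 - p2) in a single poll.
# Assumptions (mine, not from the polls quoted above): simple random sampling,
# hypothetical sample sizes n, and no design effect for the clustered sampling
# that real exit polls use (a design effect would widen these intervals).
import math

def moe_of_lead(p1, p2, n, z=1.96):
    """95% margin of error for the difference between two multinomial shares."""
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var)

# e.g. the an-Najah figures, Fatah 45% vs Hamas 42%, at assumed sample sizes
for n in (1000, 2000, 5000):
    print(n, "respondents ->", round(100 * moe_of_lead(0.45, 0.42, n), 1), "points")
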
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 06:02 AM
Response to Original message
1. Is there a history of shy Hamas in exit polls?
Edited on Thu Jan-26-06 06:04 AM by papau
Is this a change in the model? Is this in any way comparable to the model change in the US exit poll that defied logic and did not correlate with ethnic and area-specific polls of subgroups?

Does the math that proves election theft in the US even apply to this election given the lack of data from prior Hamas voting records that would be needed to establish a model?

Could this be a best effort at a first Hamas exit poll, and have we indeed discovered that the very valid "Shy Tories" adjustment was not adequate for the stratified-sample model of the Hamas exit poll?

Was there a change in the "Shy Tories" adjustment that would imply election theft?
 
OnTheOtherHand (Donating Member, 1000+ posts) - Thu Jan-26-06 06:57 AM
Response to Reply #1
2. "a history of shy Hamas in exit polls?"?
Well, we can't very well look back to January 2005, since Hamas boycotted that election. We can look back to pre-election polls. PSR had Fateh running ahead at the national level, with Hamas closing fast, and the two running essentially even in district elections.
http://www.pcpsr.org/survey/polls/2005/preelectionsdec05.html
http://www.pcpsr.org/survey/polls/2006/preelectionsjan06.html

I think analogies to U.S. exit polls would be flawed in many respects. But as far as I can tell at this point, there is no reason to think that the Palestinian vote count is radically wrong. So (in a fair world, and subject to further review), this result would stick a fork in the meme that 'exit polls are so accurate, they are used around the world to verify the accuracy of elections.' Actually, Ukraine should have done that, since the exit polls there differed substantially and were subject to serious methodological critiques.

There is no "math that proves election theft in the US," or if there is, the professionals remain unaccountably oblivious to it. See for instance http://elections.ssrc.org/research/ExitPollReport031005.pdf (no, expert opinion hasn't visibly shifted since then). Asserting fringe beliefs as factual, without substantiation, doesn't seem very constructive to me.

I can't tell yet, but I have to assume that indeed, the Palestinian projections were not based in part on prior returns, whereas the U.S. projections were. This point verges on moot, because most of the people who have done "math" on the U.S. exit polls actually disregard the fact that the estimates incorporate prior returns. The Palestinian results seem to be well outside nominal margins of error under any assumptions.

What 'Shy Tories' adjustment are you talking about?
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 08:45 AM
Response to Reply #2
4. the professionals include whom?
Edited on Thu Jan-26-06 08:50 AM by papau
Do I and those with similar math backgrounds on DU count? Indeed, the exit poll research folks, including those published in peer review and in the media re the last election, are here on DU. Or is an SSRC blessing required?

Sorry, but http://elections.ssrc.org/research/ExitPollReport031005.pdf, well written and accurate as it is, does not mean squat, since it simply accepts the Mitofsky analysis, which has been shown to be based on bad logic, and does not reflect last week's release of a real analysis of the precinct-level data with bias analysis.

The Palestinian results are not outside nominal margins of error if the model is just wrong - which is the 'Shy Tories' problem. As to 'Shy Tories', the literature has a great deal of discussion of the error in the model in Great Britain that underestimated the Tory vote in an election.

That the people doing the "math" "actually disregard the fact that the estimates incorporate prior returns" in model building is new to me, but then I am rather old and have not done any paying work in this area for decades.

The meme that 'exit polls are so accurate, they are used around the world to verify the accuracy of elections' seems alive and well - unless our CIA gets involved, as they did in Venezuela. As to Ukraine, my sources have not said we screwed with their exit polls.
 
OnTheOtherHand (Donating Member, 1000+ posts) - Thu Jan-26-06 09:15 AM
Response to Reply #4
6. no blessing is required, but
Edited on Thu Jan-26-06 09:59 AM by OnTheOtherHand
in terms of the professional discourse -- which is not so much what occurs here on DU -- the belief remains fringe. Whether or not it is true, to assert that it has been proven may give readers a grossly distorted view of the debate. Cold fusion may have been "proven" too, but it remains irrelevant in mainstream physics.

If you want to go into either of the reports out of NEDA last week, I am happy to discuss them with you. No political scientist has signed on to either one, as far as I know. Somewhere there must be a political scientist who more or less agrees with one of them. It is, as a matter of logic, perfectly possible that Dopp and Baiman are right and the political science profession at large is wrong. I remain emphatically unconvinced. More to the point, my colleagues remain unconvinced; I am simply one of the very few who is willing to take the time to explain why.

I cannot tell who you mean by "the exit poll research folks," but I don't know of anyone who has actually conducted exit poll research who has supported the view that the election was stolen.

How do you believe that shy Tories are or might be incorporated into exit poll models?

I cannot tell what your final point is. If the Palestinian exit polls showed Fatah winning beyond the margin of error, and the actual returns show Hamas winning, and the actual returns are essentially correct, does it really make any sense to assert that exit polls are sufficiently accurate to verify the accuracy of elections? Surely we would at least have to add some conditions.

(EDIT to remove unintentionally contentious language)
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 09:59 AM
Response to Reply #6
10. "fringe" - an interesting term - when plates came out in the 60's it was
fringe and then all the old farts died off and now it is a given.

Likewise "cold fusion" has not gone away - and indeed outside the US - as in Japan - serious peer reviewed work goes on - albeit not under the "cold fusion" label.

In any case let's discuss the NEDA - why has no political scientist has signed on to either one?

Is the status quo so powerful - or is there a real reason?
 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 10:34 AM
Response to Reply #10
12. Here is a formal version
of my critique of the NEDA smoking gun paper:

Re: The Gun is Smoking: 2004 Ohio Precinct-level Exit Poll Data Show Virtually Irrefutable Evidence of Vote Miscount

I consider that there are major methodological problems with this paper, and that it needs to be withdrawn until it is radically revised. I will list them below. However, before I do so, I will say that I do not dissent from the inference that the WPD in Ohio was not due to chance. A simple one-sample t test, or even a chi square, on the ESI data, is sufficient to rule this out. I will also say that what follows is in no way an argument that there was no vote corruption or electoral injustice in Ohio. I believe that there was. I do not believe this paper provides anything like "virtually irrefutable" evidence.

So, to continue:

1. The paper attempts to show that the WPD in certain individual precincts is outside the MoE for that precinct. While potentially a worthwhile exercise, the paper does not succeed in demonstrating this (although it is nonetheless likely to be true), as the calculation depends on the sample size for each precinct. The authors attempt to determine this by matching the Roper samples to the ESI precincts. In doing so, they assume that the ESI samples are exactly double the size of the Roper samples. This leads to two problems. Firstly, if the ESI samples were double the size of the Roper samples, in many cases sampling error (hypergeometric) would lead to greater discrepancies in vote share between the ESI and Roper data than those apparently assumed by the matching algorithm. But secondly, there is no reason to suppose that each ESI sample is double the size of the Roper sample in any case. Overall, as we know from the E-M report, the subsample was rather over half of the total sample (it would appear that it was intended to be half, as the Roper sample Ns peak sharply at 50, and I believe the intended total sample size was 100; however, from the E-M report we know it was nearer 80). This means that in many cases, especially where the Roper sample is small, the ESI sample may also be low. The MoE for many precincts may therefore have been underestimated - more importantly, there is no way of concluding whether the MoE has been correctly determined or not, so no conclusion can be drawn.

2. A number of plots are given in which a "pattern" of negative correlation is shown between WPE and Kerry's exit poll share. This is quite invalid, as the plots portray the same error term on both axes. Any error in WPD will be reflected in exit poll share directly, even if this is confined to sampling error. The strength of the correlation will reflect the amount of variance in WPD due to any sampling or non-sampling error in the poll, and therefore cannot tell us anything else. No "pattern" of vote-corruption can possibly be inferred from such a correlation.

3. The plots are shown as bar-charts. No correlation coefficients are given, but a regression line is shown, and readers are invited to observe the "pattern". As the pattern we are invited to observe is a linear correlation, it is quite invalid to arbitrarily average the WPDs of precincts that happen to share a value on the X axis. This will happen fairly frequently in the data set because the ESI "blurring" procedure involved allocating precincts to a "band" of precincts with similar vote-share, and substituting the mean of that band for the actual vote share. Moreover it is quite unnecessary: if the scatterplot function in Excel is used rather than the bar chart function, no pooling will be necessary. Scatterplots are the standard way of portraying a correlation graphically, and indeed, a correlation is computed precisely by computing the best fit line through the scatter and analysing the residuals. It is not computed by computing the best fit line through a bar chart in which some data points have been arbitrarily merged. This procedure will falsify the relationship between X and Y variables, and of course reduce the residual variance.

4. The paper invites us to observe that the "pattern" of WPD plotted against a) vote share and b) exit poll share is typical of the pattern produced by fraud, and references Dopp's recent paper: Vote Miscounts or Exit Poll Error? New Mathematical Function for Analyzing Exit Poll Discrepancy. I have demonstrated why (b) is invalid. However (a) is also problematic. Dopp models a fraud scenario in which a fixed proportion of Kerry votes are switched to Bush. Clearly, in such a scenario, the WPD will be greater in precincts with a larger proportion of true Kerry votes. If ALL precincts are so corrupted, therefore, the correlation between WPD and Kerry's vote-share will be negative (more WPD at the Kerry end); however, precincts will also be bodily shifted Bushwards. If, therefore, only a subset of precincts are so corrupted, the negative correlation induced by the vote corruption will tend to be cancelled out by the unshifted, uncorrupted precincts, and, if sufficiently few precincts are corrupted, may actually generate a positive correlation. The sign of the correlation coefficient does not, therefore, tell you whether there was vote-corruption; it would merely tell you, provided you knew that it had occurred, whether it occurred in few precincts or many (see the simulation sketch after this list). An insignificant slope might mean the two effects happened to cancel out; or, alternatively, that there was no vote-count corruption. In fact, the correlation in Ohio is not significantly different from zero, and, although it trends that way, this is likely to be an artefact of the asymmetrical function of WPD expressed in terms of vote share (see equation in my paper here: http://www.geocities.com/lizzielid/WPEpaper.pdf), or, indeed, as it is not statistically significant, to variance in the data accounted for by factors orthogonal to vote-share.

5. The paper also reports analyses in which a value corresponding to the mean of the national WPD is subtracted from each precinct WPD, in an apparent attempt to demonstrate residual variance in WPD from the mean. There is absolutely no reason to postulate that there was no between-precinct variance in the non-response/selection bias postulated by E-M as an explanation for the discrepancy. Indeed much of the E-M report is devoted to analysis of this variance. There is also no reason to suppose that there was no between-state variance, and indeed the E-M report addresses this in at least one analysis, IIRC (the swing state analysis), and also gives state mean WPEs which are patently non-uniform. And of course, the plots resulting from these analyses are subject to exactly the same problems as those outlined above.

6. The paper posits only two alternatives to vote-corruption as explanation for the overall discrepancy (which is undeniably significant): lying Bush voters and non-response bias. It completely overlooks selection bias, for which there is substantial evidence; indeed, the E-M report suggests that it was likely to be a major factor.
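
Regarding point 4, here is a toy simulation of that fraud scenario - entirely my own construction with arbitrary parameters, not Dopp's model or the ESI data. In this toy setup, corrupting every precinct makes the correlation between WPD and official Kerry share clearly negative; corrupting only a subset lets the bodily Bushward shift of those precincts pull the correlation back toward zero, or even weakly positive.

# Toy simulation of the point-4 fraud scenario: switch a fixed fraction of Kerry
# votes to Bush in some share of precincts, poll the TRUE vote without bias, and
# look at the correlation between WPD and the official Kerry share. Every
# parameter below (precinct count, poll size, switch rate) is an arbitrary
# assumption of mine for illustration, not Dopp's model or the ESI data.
import random
import statistics   # statistics.correlation needs Python 3.10+

def simulate(frac_precincts_corrupted, n_precincts=5000, poll_n=80,
             switch_rate=0.2, seed=1):
    rng = random.Random(seed)
    wpds, official_shares = [], []
    for _ in range(n_precincts):
        true_kerry = rng.uniform(0.2, 0.8)          # true two-party Kerry share
        # unbiased exit poll: a simple random sample of poll_n voters
        poll_kerry = sum(rng.random() < true_kerry for _ in range(poll_n)) / poll_n
        official_kerry = true_kerry
        if rng.random() < frac_precincts_corrupted:
            # a fixed fraction of Kerry votes is switched to Bush in the count
            official_kerry = true_kerry * (1 - switch_rate)
        # WPD: official margin minus exit-poll margin (negative = poll overstates Kerry)
        wpds.append((2 * official_kerry - 1) - (2 * poll_kerry - 1))
        official_shares.append(official_kerry)
    return statistics.correlation(official_shares, wpds)

for frac in (1.0, 0.3, 0.05):
    print(f"fraction of precincts corrupted = {frac}: "
          f"r(WPD, official Kerry share) = {simulate(frac):+.3f}")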

In short, the only valid inference made in the paper seems to be that the exit poll discrepancy in Ohio was not due to chance. I am not aware that there is even any debate over this; what we want to know, and what the paper claims to demonstrate, "virtually irrefutably", is that that discrepancy was due to vote-corruption rather than any bias in the poll.

I believe I have demonstrated that the paper as it stands does no such thing, and I consider that it should be withdrawn, and only reissued when its errors are corrected, and supportable inferences made.

Elizabeth Liddle



(It's "one I did earlier" as they say on the cookery shows.)
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 12:03 PM
Response to Reply #12
13. Excellent stuff - but heavy going for old heads!
I had not read this before.

Thanks, Lizzie, for posting it.

The MoE problem in the election fraud paper (the Roper sample size issue) is obviously real given the method - but I found the method reasonable (actuaries live and die on what is "reasonable", so this may be an artifact of my career).

Your statement - that WPD will be reflected in exit poll share directly, with the strength of the correlation reflecting the amount of variance in WPD due to any sampling or non-sampling error in the poll, and that the discrepancy was not due to chance whatever the MoE on the individual precincts - is fine. But the use in http://www.exit-poll.net/election-night/uationJan192005.pdf of WPE as the measure of precinct-level discrepancy does not "feel" right to me.

I just want Edison/Mitofsky/NEP media clients to release the information on the exact sample sizes, type of voting system, locations of precinct, and other exit poll factors to allow investigation or independent analysis.

As Ron says: "The patterns are striking (especially in the graph ordered by Kerry exit poll - before "the shift" - if there was one). Very large Kerry discrepancies throughout the sample and much smaller Bush discrepancies at only one end of the sample. As you know pervasive Kerry bias would produce an "inverted U" pattern of Kerry discrepancies - larger in less partisan districts and smaller in more competitive districts". Ron also notes that the graph on p. 13 shows:
a) Pervasive large pro-Kerry discrepancies across the sample;
b) NO "INVERTED U" PATTERN THAT WOULD BE INDICATIVE OF "PERVASIVE PRO-KERRY" EXIT POLL RESPONSE BIAS;
c) No pattern of random pro-Kerry and pro-Bush discrepancies that would indicate random (non-sampling) exit-poll error;
d) An unexplained pattern of mostly small (but a couple of large) pro-Bush discrepancies that are concentrated on the right side (high Kerry precinct) side of the sample; and
e) NO PRO-BUSH DISCREPANCIES IN PRECINCTS WITH LESS THAN A 43% OR SO KERRY OFFICIAL VOTE.
"No "exit poll error" explanation has been offered for these very striking patterns (that are consistent with WPD trends in the national data)."

I do not buy that the Ohio data has been "confirmed" as having a "lot of noise" and that that explains anything.

OnTheOtherHand's reference to Mebane and Herron is a reference to yet another paper I have not read. As I said 90% of my opinion is based on what I suspect - and little else.

But then I come back to why has the basic data not been released?

If the data is ever released and we conclude the exit polling does not or can not show the fraud, as in there is "absolutely nothing that even starts to distinguish between fraud and polling bias", perhaps a paper trail and audits are the way to go in the future. Sort of like - exactly like - the way Venezuela ran their election.


 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 12:55 PM
Response to Reply #13
15. I absolutely agree
regarding WPE as a measure. It was why I posted my paper on the problems of the WPE, and why, in the end, Mitofsky reanalysed the data.

I'm inclined to forgive Baiman and Dopp for not getting the MoEs right, because the fact is that they don't have the sample sizes, and though they made a brave attempt to divine them, their assumptions were wrong. We can perhaps talk about why the data is not available later.

And of course "noise" explains nothing. Noise could be obscuring a signal. But the point about noise is that you don't know whether there is a signal or not.

Re Ron's comments:

"The patterns are striking (especially in the graph ordered by Kerry exit poll - before "the shift" - if there was one). Very large Kerry discrepancies throughout the sample and much smaller Bush discrepancies at only one end of the sample.


This is the plot I find completely invalid. Kerry's exit poll share will be determined by many things, and one thing we can be sure of is sampling error. Where sampling error pushes it up, WPE will become more negative. Where sampling error pushes it down, WPE will become more positive. So even sampling error alone would tend to produce a negative correlation between Kerry's exit poll share and WPE. Similarly, any bias in the poll that pushes Kerry's exit share up will push WPD in a more negative direction, and vice versa. So I do not see any way in which this plot can be interpreted to say anything about vote miscounts. Indeed, Ron rather retreats from this plot in later postings ;)
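
To illustrate, here is a toy simulation of my own (arbitrary precinct count, poll size and share distribution, not the real data): with no fraud and no response bias at all, the correlation between WPE and Kerry's exit poll share still comes out clearly negative, because the same sampling error sits on both axes, while the correlation with the official share hovers around zero.

# Toy illustration, my own arbitrary parameters (precinct count, poll size,
# share distribution), not the real data: NO fraud and NO response bias, yet
# plotting WPE against Kerry's EXIT POLL share still yields a negative
# correlation, because the same sampling error sits on both axes.
import random
import statistics   # statistics.correlation needs Python 3.10+

rng = random.Random(42)
n_precincts, poll_n = 5000, 80

true_shares, poll_shares, wpes = [], [], []
for _ in range(n_precincts):
    t = rng.uniform(0.2, 0.8)                                   # true (= official) Kerry share
    p = sum(rng.random() < t for _ in range(poll_n)) / poll_n   # unbiased exit poll
    true_shares.append(t)
    poll_shares.append(p)
    # WPE: official margin minus poll margin; official = true here, so WPE is pure noise
    wpes.append((2 * t - 1) - (2 * p - 1))

print("r(WPE, Kerry exit poll share) =", round(statistics.correlation(poll_shares, wpes), 3))
print("r(WPE, Kerry official share)  =", round(statistics.correlation(true_shares, wpes), 3))
# The first comes out clearly negative although nothing is wrong with the count
# or the poll; the second hovers around zero.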

As you know pervasive Kerry bias would produce an "inverted U" pattern of Kerry discrepancies - larger in less partisan districts and smaller in more competitive districts". Ron also notes that the graph on p. 13 shows:
a) Pervasive large pro-Kerry discrepancies across the sample;
b) NO "INVERTED U" PATTERN THAT WOULD BE INDICATIVE OF "PERVASIVE PRO-KERRY" EXIT POLL RESPONSE BIAS;


Sure, there are pervasive large pro-Kerry discrepancies across the sample. But we know that, because we know that the discrepancy was significantly negative. However, you would only get an "inverted U" pattern if response bias was uniform. There is absolutely no reason to suppose it was uniform, and every reason to suppose it was not. The E-M report did "analyses of variance" on the WPEs on the very assumption that there was variance in non-response bias.


c) No pattern of random pro-Kerry and pro-Bush discrepancies that would indicate random (non-sampling) exit-poll error;


Well Ron does not say what this pattern is, and offers no test of its presence in the data.

d) An unexplained pattern of mostly small (but a couple of large) pro-Bush discrepancies that are concentrated on the right side (high Kerry precinct) side of the sample;


Well, there is a problem here, as for some bizarre reason they chose to aggregate precincts that shared a value on the x axis. So it is hard to comment. But again, no "pattern" is offered, as far as I can tell, just a description. At the risk of sarcasm, I could describe the tealeaves nestling at the bottom of my tea-cup right now as "an unexplained clump of mostly small (but a couple of large) tea-leaves that are concentrated at the handle side of the cup".


and e) NO PRO-BUSH DISCREPANCIES IN PRECINCTS WITH LESS THAN A 43% OR SO KERRY OFFICIAL VOTE. No "exit poll error" explanation has been offered for these very striking patterns (that are consistent with WPD trends in the national data)."


I believe this observation is an artefact of the aggregation of precincts with shared x values, but I'd have to check. But I simply fail to see a "very striking pattern" here, and no reference is given for the pattern alleged to be in the "national data", although I suspect I know what he is talking about.

I confess that it is true that I do not think the exit poll evidence is good evidence of fraud. I am, however, willing to be persuaded otherwise by good statistical arguments. So far, none, IMO, have been forthcoming, and I find this paper one of the most flawed pieces of analysis I have read to date. It makes me rather cross, as I think there is some excellent evidence for some abominable instances of electoral injustice in 2004, and possibly outright fraud. And I don't think bad statistical papers help make a good case better.

Sorry!
 
OnTheOtherHand (Donating Member, 1000+ posts) - Thu Jan-26-06 12:43 PM
Response to Reply #10
14. you take my point rather precisely
Edited on Thu Jan-26-06 12:48 PM by OnTheOtherHand
To say that the belief is "fringe" has no direct bearing on whether it is true, nor on whether it leads to intellectually productive research. (EDIT: By "intellectually productive" I don't mean "abstractly interesting to people without lives," although perhaps that too; I mean that by trying to test our suspicions about 2004, we may learn useful things about how to prevent or to detect election fraud in the future. Of course, we already know more about how to prevent election fraud than has been acted upon!)

However, realizing that a belief is fringe, and trying to understand why it is fringe, does help to inform one's contributions to further debate. Some DU participants choose to believe (or write as if they believe) that all honest observers already agree with them. That may be rhetorically effective in some settings, but it tends to lead down dead-end alleys.

(You seem to have a more optimistic view of "cold fusion" than I do, but that is beside the point; I am happy to stipulate that it has not gone away.)

Let me define the belief in question as, roughly, this: the exit polls provide strong evidence of vote miscount in Ohio and/or elsewhere. Nothing sneaky intended by the word "strong"; it seems to be a weaker word than NEDA's recent phrase "virtually irrefutable," but it's deliberately subject to interpretation. But I can't think of a meaning of "strong" that would allow me to agree with the belief.

Fundamentally, the analytical efforts to support this belief seem to share the premise that the exit polls should be assumed free of bias. A wild p value is portrayed as strong evidence of fraud in itself. Mitofsky's and others' statements about non-response and selection bias are perceived and portrayed as unsubstantiated hypotheses, even as desperate improvisations.

But it is hard for me to understand how anyone trained in survey research would ever be inclined to assume that any poll 'should be' bias-free. So there is a gulf in prior probabilities. It's not that political scientists are trained to assume that elections are fraud-free -- far from it -- but those of us who do survey stuff certainly are trained to look askance at survey data. When we encounter people (I'm not referring to you!) who don't understand this, don't want to understand it, and equate it with Vichy collaboration, we tend to move along.

That said, some of us have spent considerable time looking at possible correlates of exit poll 'error' that might support inferences of bias on the one hand or fraud on the other (bearing in mind that the two are not mutually exclusive). I can't think of any correlates that I consider strongly suggestive of fraud (although Steve Freeman has tried), and many fit well with the bias interpretation (e.g., average Within Precinct Error covaries with interviewer age; this seems more likely due to non-response bias than to fraud).
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 02:00 PM
Response to Reply #14
16. Thanks for the reply
It's nice to know that the real professionals in this field are taking this seriously.

The impression that this lay person got was that the data was locked up because of privacy problems - which I view as nonsense, since non-disclosure agreements plus a paper that reveals no specific data can certainly be the result of letting a group of outside folks look at the complete data. I also got the impression that there was a resistance to trying to prove fraud.

In any case we all seem more or less in agreement - with me the optimist who thinks a method of analysis that proves fraud will be found once the total data is reviewed.

As to the future, I really do like the Venezuela method of two receipts, one retained by the polling place so that the vote can be audited.

Thanks for the reply :-)
 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 02:51 PM
Response to Reply #16
17. Just a word on data:
All the questionnaire responses that were used for the crosstabs on election night are publicly available, and have been since January. This is what Ron refers to as the Roper data, and amounts to a random sample of rather more than half the total data collected. The remainder of the data was used only for totalling tallies on the presidential vote question, and not for the crosstabs, for which full questionnaire response, clearly, had to be used.

The Roper data includes answers to all questions, including age, race and sex of the respondents. However, no precinct identifiers are given, nor precinct vote totals, as this would allow precincts to be identified. The reason is that because of the detail on respondents given, in some cases, respondents could be identified if the precinct identity was known.

However, Election Science Institute commissioned a dataset for Ohio, in which the vote totals were "blurred". The data also included exit poll response data for each presidential candidate from the full set of answers to the presidential vote question. The "blurring" of the vote-totals was done according to a method developed by Fritz Scheuren, who was one of the authors of the study. It involved banding all Ohio precincts according to vote share, and taking the mean of each band. The NEP precincts were then given the mean of the band into which they fell. This means that the vote count is not a unique identifier, but that the dataset behaves in a statistically valid fashion.
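
In code, the kind of blurring described above might look something like this sketch - the equal-width bands and the band width are my own assumptions for illustration, and the actual Scheuren procedure may well differ in detail:

# A minimal sketch of the kind of "blurring" described above: band all precincts
# in the state by vote share, then report each polled precinct only as the mean
# of its band. Equal-width bands and the band width are my own assumptions for
# illustration; the actual Scheuren procedure may differ in detail.
from statistics import mean

def blur_vote_shares(all_precinct_shares, polled_precinct_shares, band_width=0.02):
    # assign every precinct in the state to a band of similar vote share
    bands = {}
    for share in all_precinct_shares:
        bands.setdefault(int(share / band_width), []).append(share)
    band_means = {k: mean(v) for k, v in bands.items()}
    # each polled precinct is reported only as its band mean, so the vote share
    # is no longer a unique identifier for the precinct
    return [band_means[int(share / band_width)] for share in polled_precinct_shares]

# hypothetical statewide shares, plus a few exit-poll precincts drawn from them
statewide = [0.231, 0.244, 0.238, 0.405, 0.412, 0.419, 0.866, 0.871]
polled = [0.238, 0.412, 0.871]
print(blur_vote_shares(statewide, polled))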

Their paper is currently undergoing peer-review. Clearly it is not an "independent" study, in the sense that the data collection and preparation was done by people at E-M (Scott Dingman and Warren Mitofsky) but the data analysis was done by ESI. My understanding is that they hope that more datasets, for other states, can be done in the same way.

About "resistance to trying to prove fraud": I think it is clear that Mitofsky was pretty confident from the outset that polling problems had been responsible for the discrepancy. Whether he should have been so confident is another issue. It leaves him open to the charge of biased analysis, but not necessarily of resistance to proving fraud.

 
OnTheOtherHand (Donating Member, 1000+ posts) - Thu Jan-26-06 03:37 PM
Response to Reply #16
18. as usual, there are separable issues here
It's hard to generalize about "the real professionals." I would say that a lot of real professional political scientists take election integrity concerns very seriously; that some of them have looked at the 2004 data and generally so far have been unconvinced that they are anomalous; and... well, hey, some of them are really good people and should be cultivated as allies. But of course I tend to like political scientists, as an occupational hazard.

Not to rehash all the ground covered by Febble-- Some more results can and IMHO should come out of the exit poll analysis without jeopardizing privacy, although I oppose releasing any more precinct identifiers than independent researchers have already pried loose. I don't think the exit poll analysis is going to yield much more for the fraud debate than it already has. If massive fraud occurred, we will have to -- and probably should be able to -- find the evidence in the election returns, and ultimately (in many cases) in the original ballots. Honestly, I am surprised and disappointed that those who do believe that the exit polls point to fraud haven't worked harder to persuade their critics by using other data or at least suggesting tests that could actually be conducted on the exit poll data.

(I would say I am the optimist who thinks that in fact, the 2004 election wasn't stolen, and any improvements in data or methods will confirm that judgment. But that is just an opinion, not a Position -- and a lot of the policy agenda really doesn't hinge on that empirical debate.)

Looking forward, yes, I think some auditable paper trail is certainly required, at least based on my reading of the computer science debate to date.
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 03:49 PM
Response to Reply #18
19. Without precinct identifiers you cannot get to the type/maker of machine - or can you?
It is just too easy to reverse the vote for anyone to assume that it was not done.

The 18181 and other cute data points point to someone having fun - IMHO.
 
OnTheOtherHand (Donating Member, 1000+ posts) - Thu Jan-26-06 07:51 PM
Response to Reply #19
20. depends on how much you want on type/maker
The E/M report already broke the results out by broad type of machine. Overall, the highest error rate was for mechanical voting machines (aka lever machines), but E/M makes a decent case -- unfortunately without inferential stats that would help us to assess it -- that any apparent voting-method effects are probably largely an artifact of urban/rural differences. Based on other research, there probably should be some small equipment effects on WPE in at least some circumstances, but if there are, it appears that they would be very hard to detect in these data.
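
To illustrate the urban/rural confounding argument with a toy example (all numbers invented, not the E/M data): if one machine type is concentrated in urban precincts, and urban precincts have larger WPE for reasons unrelated to the equipment, a raw comparison by machine type shows a spurious equipment effect that shrinks once you compare within urban and rural precincts separately.

# Toy illustration of the urban/rural confounding argument, with invented
# numbers (not the E/M data): lever machines are concentrated in urban
# precincts, urban precincts have larger WPE for reasons unrelated to the
# equipment, and the raw "machine effect" shrinks once you compare within
# urban and rural precincts separately.
import random
from statistics import mean

rng = random.Random(3)
precincts = []
for _ in range(4000):
    urban = rng.random() < 0.5
    machine = "lever" if rng.random() < (0.7 if urban else 0.1) else "other"
    wpe = rng.gauss(-8 if urban else -4, 6)   # urban WPE more negative, regardless of machine
    precincts.append((urban, machine, wpe))

def avg(rows):
    return round(mean(r[2] for r in rows), 1)

for m in ("lever", "other"):
    rows = [r for r in precincts if r[1] == m]
    print(m, "| raw mean WPE:", avg(rows),
          "| urban only:", avg([r for r in rows if r[0]]),
          "| rural only:", avg([r for r in rows if not r[0]]))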

It's conceivable that breaking out Diebold would lead to a different result, but this seems unlikely because we can do that exercise with actual election results -- which gives us much more statistical power, since we aren't limited to exit poll precincts. In the analyses I've seen, there is no support for a Diebold effect in 2004. Mind you, I'm not vouching that Diebold is secure, or even that Diebold wasn't hacked somewhere in 2004.

On 18181 -- I dunno, but if I were hand-rigging (in any sense) data for every race in every precinct in a bunch of Texas counties (or nationwide?) in 2002, I don't know that I would bother to make some of the totals add up to 18181 just for the fun of it. It's like the claim that someone went through Cuyahoga County in 2004 applying "randomization factors," yet occasionally "reused" them in consecutive precincts to "save time" (I don't know if those are all literally quotations, but that is the idea as I understand it). The former entails doing extra work practically in an attempt to get caught, which might make sense for a kooky serial killer -- or if the people who stole the election decided to save some money by hiring a teenager to do the coding -- but isn't something I would have expected a priori. (The latter seems even weirder to me, although someone may have offered a rationale somewhere. If one is applying "randomization factors," aren't there perfectly good random number generators that would save a whole lot more time than just reusing some of the factors?)

It looks like degenerate data mining to me: given enough data, there will always be strange but meaningless anomalies. The anomalies I "like" are ones that either fit an a priori hypothesis, or lead to some confirmatory detail. For instance, in several of the Cuya precincts where third-party candidates got bizarre numbers of votes, the candidates who got them are the ones you would expect in those specific multiple-precinct polling places if Kerry-voter ballots in one precinct were counted in the other. I find that convincing.
 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 09:22 AM
Response to Reply #4
8. papau, if you have math skills
and I believe you have, you may like to reconsider:

"does not mean squat since it it simply accepts the Mitofsky analysis that has been shown to be based on bad logic and does not reflect the release last week of a real analysis of the precinct level data with bias analysis."

The "bad logic" I assume is a reference to Dopp's paper

http://electionarchive.org/ucvAnalysis/US/exit-polls/ESI/ESI-hypothesis-illogical.pdf

and the "real analysis", I assume is a reference to the Baiman Dopp paper

http://electionarchive.org/ucvAnalysis/OH/Ohio-Exit-Polls-2004.pdf

Have you actually read them? What convinces you that they are valid?
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 09:55 AM
Response to Reply #8
9. I read them, tried to understand, and "feel" better with Baiman
But the time spent was only a few hours, and 90% of my opinion is simply what I suspect is the right answer, as opposed to any deep analysis.

The Mitofsky analysis (critiqued in Dopp's paper) does not ring true as a means of checking for fraud - just my opinion - and assumes that the method of fraud did not involve all precincts or all machines, and indeed involved different methods in different areas.
 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 10:21 AM
Response to Reply #9
11. Well, as one closely involved
in the Mitofsky analysis, it wasn't a means of "checking for fraud". It was a test of a particular hypothesis concerning the role of hypothesised fraud in producing the exit poll discrepancy.

You are right that if you assume that fraud was uniform across all precincts, states, voting machines etc, then the analysis wouldn't show up fraud, but that would be a very great assumption. If we assume that fraud was varied in magnitude and/or prevalence, then it ought to produce a swing-shift correlation, and it doesn't.
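
A toy version of that swing-shift logic, under my own simplified assumptions (not the actual E-M analysis or data): if vote-switching in a subset of precincts produced the discrepancy, those precincts should show both a bigger discrepancy and a bigger swing toward Bush since the prior election, giving a correlation between the two; if the discrepancy came from uniform polling bias, no such correlation is expected.

# Toy version of the swing-shift logic, under my own simplified assumptions
# (not the actual E-M analysis or data): fraud in a subset of precincts makes
# the exit-poll discrepancy and the swing since the prior election move
# together; a uniform polling bias does not.
import random
import statistics   # statistics.correlation needs Python 3.10+

def run(fraud, poll_bias, n_precincts=5000, poll_n=80, seed=7):
    rng = random.Random(seed)
    wpds, swings = [], []
    for _ in range(n_precincts):
        kerry_2000 = rng.uniform(0.25, 0.75)            # prior-election Democratic share
        kerry_2004_true = min(max(kerry_2000 + rng.gauss(0, 0.03), 0.05), 0.95)
        official_2004 = kerry_2004_true
        if fraud and rng.random() < 0.2:                # fraud in 20% of precincts...
            official_2004 = kerry_2004_true * 0.9       # ...switching 10% of Kerry votes
        # exit poll of the TRUE vote, with an optional uniform pro-Kerry bias
        p = kerry_2004_true + poll_bias
        poll = sum(rng.random() < p for _ in range(poll_n)) / poll_n
        wpds.append((2 * official_2004 - 1) - (2 * poll - 1))          # official minus poll margin
        swings.append((2 * official_2004 - 1) - (2 * kerry_2000 - 1))  # swing since prior election
    return statistics.correlation(wpds, swings)

print("varied fraud, no poll bias :", round(run(fraud=True, poll_bias=0.0), 3))
print("uniform bias, no fraud     :", round(run(fraud=False, poll_bias=0.03), 3))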

There are a number of types of fraud that it wouldn't detect, especially the kinds of electoral injustice we know occurred in Ohio, including under-supply of voting machines to Democratic precincts in Franklin County, or problems that affected Kerry and Bush votes equally but which may have been more prevalent in Dem precincts. Also, a fiendishly clever vote-switching algorithm that ensured that vote-switching only prevented Bush doing badly, and cut out when he was doing OK, might fool the plot. But I'm not convinced that such a thing is possible or likely.

But thanks for your candid response re 90% of your opinion! The paper is hard to read, so if you trust the authors, like the conclusion, and skip to the bottom line, then it may leave you convinced.

But I refer you to my critique here:

http://www.democraticunderground.com/discuss/duboard.php?az=show_mesg&forum=203&topic_id=409512&mesg_id=409940

Although it is only fair to issue a PG certificate. My critique is a hatchet job. There are others on the same thread, as well as contributions by Baiman. Most of Dopp's were deleted.
I'm afraid the :popcorn: is cold.
 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 08:33 AM
Response to Reply #1
3. Shy Tories
http://www.strath.ac.uk/Other/CREST/p56.htm

Interestingly, the "shy Tory" phenomenon in pre-election polls seems to have been partially ameliorated by using telephone rather than face-to-face interviews.

Does that suggest that face-to-face interviews (which would include exit polls) are more prone to bias against "shy" voters than telephone interviews?

In which case, the advantage held by exit polls over pre-election polls in eliminating "don't knows" might tend to be offset by increased "shyness" effects.

Whatever.

The point is that math applied to polls doesn't "prove" election theft, because polls are prone to non-sampling error of all kinds. I can't imagine a poll using face-to-face interviews in a place like Palestine is likely to be free from bias.

Let's see: would I dare tell an exit pollster in Northern Ireland that I'd just voted for Sinn Fein? Would I dare tell an exit pollster that I hadn't?
 
papau (Donating Member, 1000+ posts) - Thu Jan-26-06 08:49 AM
Response to Reply #3
5. Hi Febble - thanks for posting on this thread
Edited on Thu Jan-26-06 08:49 AM by papau
Your point that math applied to polls doesn't "prove" election theft is well taken.

The Florida demo showed how it could have been stolen and the math just suggests that it was.

A court would require more.

 
Febble (Donating Member, 1000+ posts) - Thu Jan-26-06 09:16 AM
Response to Reply #5
7. Hi, Papau
Edited on Thu Jan-26-06 09:17 AM by Febble
I agree that the polls are a fire alarm.

They are faulty fire alarms, of course, but in a safe system (like the UK) if the alarm goes off, you know that a real fire is unlikely, so you assume that the alarm is faulty.

In a system like the US, if the alarm goes off, you've got to run for the exits anyway, because you know the place is a fire hazard.

I happen to think there were small fires AND a faulty alarm in 2004, rather than a large fire that triggered the alarm. YMMV.

But people were right to run for the fire-exit anyway.

And in any case, you've got to deal with the fire hazard.


(edited in an attempt at greater clarity)