TIA: A Reply to DUers who question my probability assumptions/calculations

mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 01:34 PM
Original message
TIA: A Reply to DUers who question my probability assumptions/calculations
Edited on Fri Nov-24-06 01:57 PM by mom cat
TIA: A Reply to DUers who question my probability assumptions/calculations

In this post, I will address the criticisms leveled at the data and assumptions used here:
http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=364x2775205

Specifically, they were:
1) the 1.50% Margin of Error (MoE),
2) the 60% undecided vote allocation (UVA) and
3) that using 116 Generic Polls (Sept. 2005- Nov.6, 2006) as a basis for
projecting the vote was not valid because the respondents were not asked:
If you could vote for your district representative today, who would you
vote for, the Democrat or the Republican? DUers suggested that instead they
were asked: which party would you prefer to see controlling congress?

Let's first dispense with the third item. I checked the pollingreport.com
site from which I obtained the data. ALL 116 polls DID in fact ask the
question: Who will you vote for in your district, the Dem or the Rep?
http://www.pollingreport.com/2006.htm

To confirm this, I compared the first 20 polls in the POLLING DETAIL LIST
to the 116-POLL SUMMARY LIST TABLE. In the detail list, 16 pollsters asked
"Who will you vote for in your district, Dem or Rep"? Only 4
(NBC, LA Times, Cook and Quinnipiac) asked "Which party do you want to
win the Congress"?

THE 4 POLLS WERE NOT INCLUDED IN THE 116-POLL SUMMARY TABLE.
THE OTHER 16 POLLS WERE.
THEREFORE, THE USE OF THE 116 GENERIC POLLS WAS VALID.

Pollingreport.com kept the apples away from the oranges.

These are the latest Generic Polls listed among the 116:
FOX News/Opinion Dynamics Poll. Nov. 4-5, 2006.
N=900 likely voters nationwide. MoE ± 3.
"Thinking ahead to this November's elections, if the congressional
election were held today, would you vote for the Democratic candidate in
your district or the Republican candidate in your district?" If
unsure: "Well, if you had to vote, which way would you lean?"

CNN Poll conducted by Opinion Research Corporation. Nov. 3-5, 2006.
N=934 registered voters nationwide (MoE ± 3); 636 likely voters (MoE ± 4).
"If the elections for Congress were being held today, which party's
candidate would you vote for in your congressional district: the Democratic
Party's candidate or the Republican Party's candidate?" If unsure:
"As of today, do you lean more toward the Democratic Party's candidate
or the Republican Party's candidate?"

USA Today/Gallup Poll. Nov. 2-5, 2006.
N=1,362 registered voters nationwide (MoE ± 3);
1,007 likely voters (MoE ± 4); 801 regular voters (MoE ± 4).
"If the elections for Congress were being held today, which party's
candidate would you vote for in your congressional district: the Democratic
Party's candidate or the Republican Party's candidate?" If unsure:
"As of today, do you lean more toward the Democratic Party's candidate
or the Republican Party's candidate?"

ABC News/Washington Post Poll. Nov. 1-4, 2006.
N=1,205 adults nationwide. Fieldwork by TNS.
"If the election for the U.S. House of Representatives were being held
today, would you vote for the Democratic candidate or the Republican
candidate in your congressional district?" If other/unsure:
"Would you lean toward the Democratic candidate or toward the
Republican candidate?"

Pew Research Center
for the People & the Press Survey conducted by Princeton Survey
Research Associates International and Schulman, Ronca & Bucuvalas.
Nov. 1-4, 2006. N=2,369 registered voters nationwide (MoE± 2.5);
1,795 likely voters (MoE ± 3). LV = likely voters. Except where noted,
results below are among registered voters.
"If the 2006 elections for U.S. Congress were being held TODAY, would
you vote for the Republican Party's candidate or the Democratic Party's
candidate for Congress in your district?" If other/unsure: "As of
TODAY, do you LEAN more to the Republican or the Democrat?"
.
Newsweek Poll
conducted by Princeton Survey Research Associates International.
Nov. 2-3, 2006. N=1,206 adults nationwide (MoE ± 3), 1,045
registered voters (MoE ± 4), 838 likely voters (MoE ± 4).
"Suppose the elections for U.S. CONGRESS were being held TODAY. Would
you vote for the Republican Party's candidate or the Democratic Party's
candidate for Congress in your district?" If other/unsure: "As of
TODAY, do you LEAN more toward the Republican or the Democrat?"

Time Poll conducted by Schulman, Ronca & Bucuvalas (SRBI) Public
Affairs. Nov. 1-3, 2006. N=679 likely voters nationwide.
" If the election for Congress were being held today, would you
be more likely to vote for (did you vote for) the Republican candidate or
the Democratic candidate in the district where you live?" If
unsure: "As of today, do you lean more toward the Republican Party's
candidate or the Democratic Party's candidate for Congress?"
.
CBS News/New York Times Poll. Oct. 27-31, 2006. N=598 likely voters
nationwide. MoE ± 4 (for all likely voters).
"If the 2006 election for U.S. House of Representatives were being
held today, would you vote for the Republican candidate or the Democratic
candidate in your district?"

DATA > ASSUMPTIONS > ANALYSIS > LOGIC
Given that the 2006 pre-election Generic polls and recorded vote tally are accurate, the assumptions plausible and the mathematical analysis flawless, the logical conclusion is that there was an astronomically high probability that the 2006 elections were rigged in favor of the Republicans. However, the GOP could not overcome the Democratic Tsunami and steal enough votes to win the House. But the fraud appears to have been sufficient to cut the Democratic majority by almost half to 27 seats (231-204), when compared to the projected majority of 49 (242-193). At least 11 seats appear to have been stolen. FL-13 is just one example.

THE GENERICS WERE PRE-ELECTION POLLS, NOT EXIT POLLS
There were quite a few DUers who did not focus on the fact that the 116
Generic polls were pre-election polls, NOT exit polls. Any discussion about
reluctant responders, false recall or other exit poll bias is irrelevant.
This was NOT an Exit Poll analysis. Any posts which referred to exits
should never have entered the discussion. The purpose of the analysis was
to compare 116 Generic polls to the actual vote.

Some made the claim that Generic polls are not useful for projecting votes.
If that were so, WHY do a Generic poll at all? Why did polling blogs cover
them at all? These were not exit polls, which attempt to analyze voting
demographics. A Generic Poll asks one very specific question: if the
election was held today in YOUR congressional district, who would YOU vote
for, the Democrat or the Republican? What could be more clear? What could
be more specific? Those who claim that polling organizations conduct
Generic Polls for anything OTHER than projecting the final vote count are
really reaching. All 116 polls in the list asked the question: "Who
will you vote for in your district, Dem or Rep"?

THE PROJECTED DEMOCRATIC VOTE SHARE
The projected Democratic vote share was based on the trend line of ALL 116
Generic polls taken from Sept. 2005 up to Election Day. The Democrats won
ALL 116 polls by an average 13.24% margin, 51.84D-38.60R, with 2% going to
3rd party candidates. I allocated 60% (UVA) of the 7.56% undecided voters
to the Democrats. The final projection was 56.34D-41.62R, a 14.72% margin.

THE MOE ASSUMPTION
One DUer made the following statement regarding the analysis: "Nowhere
does it show how that polling sample would be 25% less variable than a
standard poll, which almost always round DOWN (due to sample size) to 2%.
That would inflate the distribution of the final analysis by a factor of
4/3 which moves the tails farther from the mean. Since the distributions
are not linear, at that end of the curve, it could move the probabilities
out by a factor of more than 100".

Here's why the 1.5% MoE was justified: I used the FINAL COMBINED 10 polls
to calculate the MoE. There were 1000 sampled in each poll. Ten (10)
INDEPENDENT Final Generic Polls are essentially equivalent to ONE poll of
10,000 sample size. The MoE for a 10,000 sample is near 1.0%. So the 1.5%
MoE assumption was a conservative one.

The formula used to calculate MoE is:
MoE = 1.96*standard error = 1.96*SQRT(p*(1-p)/n)
For p=.56 and n= 10,000 sample-size, MoE = 0.97%.
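
As a check on that arithmetic, here is a minimal Python sketch of the same formula (the p = 0.56 share and the 1,000 and 10,000 sample sizes are the figures used above):

```python
from math import sqrt

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

# One generic poll of ~1,000 respondents vs. ten pooled polls (~10,000 respondents).
print(round(100 * moe(0.56, 1000), 2))    # ~3.08 percent
print(round(100 * moe(0.56, 10000), 2))   # ~0.97 percent
```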

AVERAGING POLLS
Generic polls are designed to sample representative congressional
districts. They all ask the same question. Calculating an average trend
line or arithmetic mean gives us greater confidence that the sample mean is
close to the true population mean. Is there anyone who will question the Law
Of Large Numbers and the Central Limit Theorem?

This needs repeating: we are analyzing the discrepancy between a 116
pre-election poll trend line (adjusted for the undecided vote allocation)
and the reported vote. THIS WAS NOT AN EXIT POLL ANALYSIS. EXIT POLL BIAS
IS NOT AN ISSUE.

UNDECIDED VOTER ALLOCATION
A DUer said: "Secondly, there is a broad unsupportable assumption that
60% of the undecideds would vote dem. Since the dems didn't get 60% of the
total vote, that is an assumption for which there is ZERO basis in fact.
Since the "undecideds" were a statistically significant portion
of the sample, the entire rest of the analysis hinges COMPLETELY on that
assumption".

Both statements are false:
1) In a study of 155 elections, the CHALLENGER won the undecided vote in
82% of them; the incumbent won 12%. So there was indeed a significant basis
in fact that the Democrats would win 60% of undecided vote. After all, they
led the average 2-party Generic poll with 57%. Why was it a reach to assume
they would win 60% of the undecided? In the 2006 election, there was a
strong incentive to kick the Republicans out. It was a referendum on Bush
(33% approval) and Iraq.

Read about the 155 election study here:
http://www.pollingreport.com/incumbent.htm

2) It's incorrect to suggest that the ENTIRE analysis hinged on UVA.
The 60% UVA assumption was in fact a conservative one. The average
Democratic 2-party Generic vote was 57.3%. Even assuming a totally
implausible 50/50 undecided voter split, the probability that the election
was fair is close to zero (see the Sensitivity Analysis table below).

Lou Harris, a world-class pollster with 40 years' experience, said this in
2004 regarding the late undecided vote:
http://www.harrisinteractive.com/harris_poll/index.asp?PID=515

PROJECTION = TREND + UVA
Assuming a 60% UVA, the model projected a 14.76% Democratic margin.
Calculating the projected vote share:

.........Trend+... UVA = Projection
Dem:.. 51.84+.. 4.54 = 56.38%
GOP:.. 38.60+.. 3.02 = 41.62%
Other:. 2.0%

Assuming a 50/50% UVA split, which is completely unrealistic in light of
the historical evidence, the projected 13.24% Democratic margin is
equal to the trend before UVA:

.........Trend+... UVA = Projection
Dem:.. 51.84+.. 3.78 = 55.62%
GOP:.. 38.60+.. 3.78 = 42.38%
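
For anyone who wants to reproduce these two tables, here is a minimal Python sketch of the allocation arithmetic, using the 51.84D/38.60R/2.0 other trend figures from above:

```python
def project(dem_trend, gop_trend, other, uva_dem):
    """Allocate the undecided pool between the two parties; returns (Dem, GOP, margin) in percent."""
    undecided = 100.0 - dem_trend - gop_trend - other
    dem = round(dem_trend + uva_dem * undecided, 2)
    gop = round(gop_trend + (1 - uva_dem) * undecided, 2)
    return dem, gop, round(dem - gop, 2)

print(project(51.84, 38.60, 2.0, 0.60))  # (56.38, 41.62, 14.76) -- 60% UVA
print(project(51.84, 38.60, 2.0, 0.50))  # (55.62, 42.38, 13.24) -- 50/50 split
```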



The DUer commented: "Without acquiring the whole data set and doing a
more supportable analysis, I would estimate the final conclusion to be off
by a factor of AT LEAST 100,000".

To disprove this statement, I recalculated the probability of the
Democratic vote discrepancy for the 50/50 UVA split. This resulted in a
5.3% discrepancy between the Democratic projected vote (55.6%) and the
reported vote (51.3%).

Probability = 8.292E-09 = NORMDIST(0.513,0.5562,0.015/1.96,TRUE)
or 1 in 120,604,893

The probability was reduced from 1 in 76 billion to 1 in 120 million. The
reduction factor is 633, which is much lower than the 100,000 estimate.
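
The same number can be reproduced outside Excel; here is a minimal Python equivalent of that NORMDIST call (scipy's normal CDF, with the standard deviation taken as MoE/1.96):

```python
from scipy.stats import norm

projected = 0.5562          # 55.62% projected Dem share under a 50/50 UVA split
reported = 0.513            # 51.3% reported Dem share
sd = 0.015 / 1.96           # standard deviation implied by a 1.5% MoE

p = norm.cdf(reported, loc=projected, scale=sd)
print(p, 1 / p)             # ~8.29e-09, i.e. roughly 1 in 120 million
```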

AVERAGING THE POLLS
It makes perfect sense to average the Final 10 Generic polls.
This is implicit recognition of the Law of Large Numbers.
Polling blogs average the latest polls for a more accurate estimate.
Pollsters know the Law.

Real Clear Politics averaged the latest 3-5 House and Senate polls:
http://www.realclearpolitics.com/epolls/writeup/election_2006-21.html

Pollster.com averaged the Final 8 Generic polls.
The Democrats had an average 11.6% margin.
http://www.pollster.com/blogs/


THE FINAL 10 GENERIC POLLS
Let's now focus on the Final 10 Generic polls.
The 10-poll average was 52.2D-39.6R, a 12.6% margin.
Poll.....Date.... DEM..... GOP... Margin
Average........ 52.2.... 39.6... 12.6

CNN.....1029... 53...... 42..... 11
NBC.....1030... 52...... 37..... 15
CBS.....1101... 52...... 33..... 19
Nwk.....1103... 54...... 38..... 16
TIME....1103... 55...... 40..... 15
.
Pew.....1104... 47...... 43...... 4
ABC.....1104... 51...... 45...... 6
USA.....1106... 51...... 44...... 7
CNN.....1106... 58...... 38..... 20
FOX.....1106... 49...... 36..... 13

Using the final 10-polls and assuming a 60% UVA and 1.50% MoE, the
probability of the Democratic vote discrepancy is 1 in 1.3 Billion. Compare
the probability to the 1 in 76 Billion probability using the 116-poll trend
line. The difference is due to the lower average Democratic margin. Three
of the final ten polls (Pew, ABC and USA Today) appear to be outliers when
compared to the rest.

UNCERTAINTY IN UVA AND MOE
Now we will address the uncertainty in UVA and MoE using SENSITIVITY
ANALYSIS. The COMBINED MoE for the latest 10-polls (10,000 sample-size) is
1.0%. This is a theoretical, formula-based MoE. It's the one which SHOULD
be used in the probability calculation.

The 1.0% MoE results in a probability of 1 in 450 TRILLION, EVEN ASSUMING A
50/50 SPLIT IN THE UNDECIDED VOTE. Although the probability is
MATHEMATICALLY CORRECT, given the MoE and UVA assumptions, it will surely
invite even more derision than the 1 in 76 billion probability estimate
based on the 1.50% MoE in the original analysis.

SENSITIVITY ANALYSIS
Calculate the probability of the Democratic vote discrepancy from the
average of the latest 10 Generic Polls for various MoE and UVA.
.
10-poll MoE is 1% (10,000 sample)
UVA: Undecided voter allocation to Democrats
.
UVA.... 50%..... 55%.... 60%.... 65%.... 70%.... 75%
Dev..... 4.0%.... 4.3%... 4.6%... 4.9%... 5.2%... 5.6%
.
MoE Probability: 1 in X (in table)
1.00% .. 450t .... nc .... nc .... nc .... nc .... nc
1.25% .. 5.6b ... 142b ... 4.5t . 183t . 9007t .. nc
1.50% . 11m .... 111m ... 1.3b . 176b ... 264b .. 4.8t
1.75%.. 267k .... 1.4m .. 8.7m .. 59m ... 454m .. 3.9b
2.00% . 23k ..... 83k ... 334k .. 1.4m .... 7m . 37m
.
2.25% .. 4k ...... 11k .... 35k .. 114k .. 399k .. 1.5m
2.50% .. 1.2K ... 2.7k ... 6.8k ... 18k ... 50k . 147k
2.75% .. 459 .... 940 ...... 2k ... 4.5k .. 11k .. 26k
3.00% .. 223 .... 411 ..... 787 ... 1.6k .. 3.2k .. 6.9k

nc: not-calculable, t:trillion, b:billion, m:million, k:thousand
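
The matrix can be reproduced with a short script. This is a sketch under the assumptions spelled out above (52.2D/39.6R 10-poll average, 2% third party, 51.3% reported Democratic share, and probability taken as the normal CDF at the reported share with mean equal to the projected share and standard deviation MoE/1.96); cells the table marks "nc" simply come out as astronomically small probabilities here.

```python
from scipy.stats import norm

DEM_AVG, GOP_AVG, OTHER = 52.2, 39.6, 2.0      # final 10-poll averages (percent)
REPORTED_DEM = 51.3                            # reported Democratic share (percent)
UNDECIDED = 100.0 - DEM_AVG - GOP_AVG - OTHER  # 6.2 percent undecided

for moe in (1.00, 1.25, 1.50, 1.75, 2.00, 2.25, 2.50, 2.75, 3.00):
    cells = []
    for uva in (0.50, 0.55, 0.60, 0.65, 0.70, 0.75):
        projected = DEM_AVG + uva * UNDECIDED           # projected Dem share for this UVA
        p = norm.cdf(REPORTED_DEM, loc=projected, scale=moe / 1.96)
        cells.append(f"1 in {1 / p:,.0f}")
    print(f"MoE {moe:.2f}%:", " | ".join(cells))
```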

PLAYING WHAT-IF
From the table, assuming a 2.0% MoE and a totally unrealistic 50% UVA, the
probability is 1 in 23,000 that the election was NOT rigged. Assuming a 65%
UVA, the probability is 1 in 1.4 million.

Assuming the combined 10-poll 1.0% MoE and a 50% UVA, the probability that
the election was NOT rigged is: 1 in 450 TRILLION.

Did we use a theoretically correct MoE? Yes. It's 1% for a 10,000 sample.
Was it a conservative UVA assumption? Yes, without a doubt. It's 50%.
Does it compute? Yes. It's 1 in 450 TRILLION.
Do you believe it? No?
Do you believe Florida-13 was stolen?

If you want to use a different set of assumptions, check the probability
matrix. It is strong circumstantial evidence of fraud. It confirms the many
reported incidents of missing votes and voting machine "glitches"
which ALWAYS seem to favor Republicans.

Who still believes that the 2006 election was NOT rigged? Just based on the
thousands of incidents reported, there is a high probability of fraud. The
only question is the MAGNITUDE of the fraud, not WHETHER there was fraud.
The above analysis ESTIMATED the extent of the fraud in percentage and
probability terms. It was based on publicly available DATA, a few
reasonable ASSUMPTIONS and the use of APPLIED MATHEMATICS.

THE TRACK RECORD
My 2004 election model exactly matched the 12:22am National Exit Poll.
I have shown that the Final 2004 NEP is a mathematical impossibility.
The impossible Final NEP was matched to the recorded vote.
What does this tell you about the recorded vote?
Was the 2004 election rigged?

My 2006 House election model projected that the Democrats would gain at
least 42 of 61 GOP-held House seats in a fraud-free election; they've
gained 29 so far. The model also projected that about 15 of the 61 seats
would be stolen. The Senate model projected that the Dems would win 6
seats. They've won 6, but at least two (MT and VA) were almost stolen.
Were the 2006 mid-terms rigged?

My independent analysis has closely matched that of Steve Freeman, Ron
Baiman, Kathy Dopp, Jonathan Simon, Bruce O'Dell, Michael Keefer, Bob
Fitrakis, RFK Jr., Greg Palast, John Conyers and others. I'm in good
company.

My models may not be perfect, but they have proven quite accurate.

FINAL COMMENTS
Your comments are welcome - if they critique the analysis.
Don't refer to the analysis as "crap". You just demean yourself.


Don't criticize my usage of Excel.
Excel can do everything - if you know how to use it.

Don't expect that this post will be peer-reviewed.
Interested parties can review it on their own.

Most pollsters/bloggers never consider the possibility of fraud.
They always assume the vote count is accurate and the polls are off.
Not a good assumption, especially when a Bush is running.

I've been analyzing elections since the first Bush/Scotus theft.
I've been an analytic software developer in engineering, finance and
investments for more years than many DUers have lived.





HERE'S A COMPREHENSIVE ELECTION 2004 SITE:
POLLING DATA, ANALYSIS, DISCUSSION
and...
THE EXCEL INTERACTIVE ELECTION MODEL
http://www.truthisall.net/

Downloads in a minute (4mb)
Easy to use (3 inputs)
Press F9 to run 200 simulations
Pre-election/exit polls
(51 State & 18 National)

A challenge to all those who still believe Bush won:
Use the National Exit Poll
"How Voted in 2000" demographic
("NatExit" sheet) to come up
with just ONE plausible Bush win scenario.

Note the feasibility constraint:
The maximum ratio of Bush 2000 voters to the total 2004 vote is 39.8%
(48.7mm/122.3mm)

Post the scenario on the Election Forum at ProgressiveIndependent.com and/or DemocraticUnderground.com



View the original 11/1/04 election model forecast of Kerry winning
51.63-51.80% of the 2-party vote:
http://www.geocities.com/electionmodel/



Karenca Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 01:43 PM
Response to Original message
1. k --- r . NT
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 07:42 PM
Response to Reply #1
90. TIA: Data Analysis Update
This is an 11/25 data/analysis update for the Final 10 Generic (GP) polls.

1) The previous version included an NBC poll which did NOT ask the question:
who will you vote for as your district representative, the Democrat or the
Republican? It was replaced by the Harris poll. The 10/29 CNN poll was
replaced by the 10/30 AP poll. Now 10 distinct pollsters are represented.

2) New: GP poll sample-size, calculated MoE, standard deviation, variance.
3) Projected 2-party GP Dem and GOP average vote shares.
4) Dem vote share, deviation and margin corresponding to UVA.
5) The 56.1% UVA assumption was added. This UVA exactly matched the pre-UVA
projected Dem vote share. Post-UVA vote shares do not change.

Assuming a 56.1% UVA, these are the probabilities for various MoE
assumptions:
MoE....Probability
1.5%: 1 in 5.4 billion
2.0%: 1 in 766 thousand
2.5%: 1 in 12 thousand
3.0%: 1 in 1.2 thousand
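
These four figures can be checked in a couple of lines, assuming (per the table further down) a projected Democratic share of 56.1% against the 51.3% reported share:

```python
from scipy.stats import norm

projected, reported = 56.1, 51.3               # percent
for moe in (1.5, 2.0, 2.5, 3.0):
    p = norm.cdf(reported, loc=projected, scale=moe / 1.96)
    print(f"MoE {moe}%: 1 in {1 / p:,.0f}")
# roughly 1 in 5.4 billion, 766 thousand, 12 thousand and 1.2 thousand
```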

Note: There has been criticism of my contention that the calculated 1.0%
MoE for the combined 10-poll sample-size of 9409 is theoretically sound.
But that argument ignores the often-used poll-of-polls methodology to
derive a closer approximation to the population mean. The Generic polls
were independent and each one sampled from the same population at nearly
the same point in time.

Although sample-size, MoE, standard deviation and variance obviously
differed in each poll (see below), it is pure nit-picking to assume that
the aggregate 10-poll sample MoE is not a valid estimation parameter due to polling variance. The combined poll-of-polls is not only intuitively
sound, it's an implicit recognition of the Law of Large Numbers. And the
elegant Central Limit Theorem is icing on the cake. Even if the theoretical UNDERLYING DISTRIBUTIONS of the polling samples differed (which they don't), in the LIMIT the SAMPLING DISTRIBUTION OF THE MEAN IS NORMALLY DISTRIBUTED AND CONVERGES TO THE POPULATION MEAN AS THE NUMBER OF SAMPLES (i.e. POLLS) INCREASES.
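
That convergence claim is easy to illustrate with a small simulation (a sketch, not part of the original analysis): draw ten independent polls of roughly 941 respondents each from a population whose true two-party Democratic share is 57.3%, and compare the spread of single polls to the spread of the 10-poll average.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, n_per_poll, n_polls, trials = 0.573, 941, 10, 20_000

# Each trial simulates ten independent polls and records every poll's Dem share.
shares = rng.binomial(n_per_poll, p_true, size=(trials, n_polls)) / n_per_poll
poll_avg = shares.mean(axis=1)                 # the 10-poll average in each trial

print(round(shares.std() * 100, 2))    # ~1.6%: spread of a single ~941-person poll
print(round(poll_avg.std() * 100, 2))  # ~0.5%: spread of the 10-poll average (smaller by ~sqrt(10))
```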

Final 10 Generic Polls
..................DEM ... GOP .. Size ... 2-pty .... MoE ..... Std ... Var
Total . Poll ... 52 .... 38.7 . 9409 ... 57.33%. 1.00% . 0.51% . 0.28%
Avg ... End ... 52 .... 38.7 .. 941 ... 57.38%. 3.27% . 1.67% . 0.028%

Har ... 1023 .. 47 .... 33 .... 795 ... 58.8% . 3.42% . 1.75% . 0.030%
AP .... 1030 .. 56 .... 37 .... 970 ... 60.2% . 3.08% . 1.57% . 0.025%
CBS ... 1101 .. 52 .... 33 .... 598 ... 61.2% . 3.91% . 1.99% . 0.040%
Nwk ... 1103 .. 54 .... 38 .... 828 ... 58.7% . 3.35% . 1.71% . 0.029%
TIME .. 1103 .. 55 .... 40 .... 679 ... 57.9% . 3.71% . 1.89% . 0.036%

Pew ... 1104 .. 47 .... 43 ... 1795 ... 52.2% . 2.31% . 1.18% . 0.014%
ABC ... 1104 .. 51 .... 45 ... 1201 ... 53.1% . 2.82% . 1.44% . 0.021%
USA ... 1106 .. 51 .... 44 ... 1007 ... 53.7% . 3.08% . 1.57% . 0.025%
CNN ... 1106 .. 58 .... 38 .... 636 ... 60.4% . 3.80% . 1.94% . 0.038%
FOXC . 1106 .. 49 .... 36 .... 900 ... 57.6% . 3.23% . 1.65% . 0.027%

Und ... 7.30%
UVA .. 56.10%

..................DEM ..... GOP ... Total .... Margin
......... Avg .. 52.0% . 38.7% . 90.7% . 13.30%
....... 2-pty .. 57.3% . 42.7% . 100% .. 14.7%

........ Proj .. 56.1% . 41.9% . 98.0% .. 14.2%
....... Vote .. 51.3% . 46.4% . 97.7% ... 4.9%
....... Dev ... -4.8% ... 4.5% . -0.3% .. -9.3%

Probability of Democratic Vote Deviation
Sensitivity Analysis
10-poll MoE: 1.0%

.................UVA to Democrats
UVA ...... 50% ..... 56.1% ..... 60% ..... 65% ..... 70% ..... 75%

Dev ..... 4.35% ... 4.80% ... 5.08% .. 5.45% ... 5.81% ... 6.18%
Vote .. 55.65% .. 56.10% . 56.38% . 56.75% . 57.11% . 57.48%
Margin. 13.30% .. 14.19% . 14.76% . 15.49% . 16.22% . 16.95%

MoE ... Probability (1 in X) of Democratic Vote Deviation
1.25% . 219b ... 36t .. 1286t .. nc .... nc .... nc
1.50% . 151m . 5.4b .. 62b .. 1.8t ... 63t .. 3002t

1.75% . 1.8m .. 25m .. 157m .. 1.9b .. 26b .. 428b
2.00% . 99k ... 766k .. 3.1m .. 21m .. 161m .. 1.4b

2.25% . 13k ... 68k .. 207k ... 950k .. 4.8m .. 27m
2.50% .. 3k ... 12k ... 29k ... 102k .. 381k .. 1.5m

2.75% . 1.0k . 3.2k ... 6.8k ... 19k ... 58k .. 186k
3.00% . 446 .. 1.2k ... 2.2k .. 5.3k .. 13.6k .. 37k

k=thousand, m=million, b=billion, t=trillion

 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 12:30 PM
Response to Reply #90
114. TIA: THE NETWORK MEDIA VOTE "PROJECTION PLANNER"
THE NETWORK MEDIA VOTE "PROJECTION PLANNER"

By TruthIsAll

From the CNN VOTE "PROJECTION EXPLAINER":
http://www.cnn.com/ELECTION/2006/pages/results/mis...

"Using exit poll results, scientifically selected representative precincts, VOTE RESULTS from the AP, and a number of sophisticated analysis techniques, EMR also recommends projections of a winner for each race it covers".

The projection process is applied by ALL major networks, not just CNN.
It can be summarized:

1- create a sample of representative precincts
2- randomly select respondents for exit poll interviews
3- collect precinct vote totals after the polls close
4- collect ACTUAL VOTE TALLIES supplied by LOCAL officials
5- employ STATISTICAL MODELS based on EXIT POLL RESULTS and PRECINCT VOTE TALLIES TO MAKE ESTIMATES AND PREDICTIONS.
6- project ONE-SIDED RACES using EXIT POLLS alone.
7- project CLOSE RACES, after waiting for the ACTUAL votes to be tabulated and reported.
8- The "DECLARATION" of a winner is UP TO ELECTION OFFICIALS, NOT CNN.
9- CNN PROJECTIONS are based on their BEST ESTIMATE.

The objective of CNN and other networks has been to accurately project the FINAL REPORTED VOTE COUNT. The only problem is that the REPORTED VOTE COUNT IS HIGHLY SUSPECT. Just look at all elections since 2000 - and the 2006 FL-13 congressional race. In each of them, there has been clear evidence of fraud. In all four elections, each of the networks has projected a HIGHLY SUSPECT REPORTED VOTE AND ASSUMED THEY WERE FRAUD-FREE.

The CNN "Projection Explainer" assumes that the vote count is accurate. It is the fundamental core of the methodology used to project the vote. The reported vote count is used in conjunction with exit polls. The projection only makes sense if there was in fact a FRAUD-FREE election. CONBINING ACTUAL EXIT POLL RESULTS WITH A REPORTED VOTE COUNT UNWITTINGLY SERVES TO VALIDATE THE FRAUD AND CONTAMINATE THE RAW EXIT POLL DATA - A DOUBLE WHAMMY.

The assumption that the vote count is an accurate representation of how people actually voted assumes ZERO fraud. It relies on the good FAITH of those who COUNT the votes, like Diebold, ES&S and the election officials who certify the voting machines. Not to mention malicious programmers and vote hackers.

The only way to project the TRUE vote and to eliminate FRAUD BIAS, is to use STATISTICAL MODELS designed to analyze ACTUAL RAW Exit Poll precinct data. Steps 1,2, and 5 are sufficient. Projections using REPORTED votes CONTAMINATE PRISTINE EXIT POLL PRECINCT DATA. FRAUD-FREE SCIENTIFIC ANALYSIS SHOULD FOCUS ON THE EXIT POLL SAMPLE DESIGN AND THE RAW PRECINCT DATA. SINCE THE FINAL PUBLISHED EXIT POLL IS A FORCED MATCH TO A SUSPECT REPORTED VOTE COUNT, THE EXIT POLL IS ALSO SUSPECT.

IF FINAL EXIT POLL WEIGHTINGS CAN BE PROVEN TO BE PHYSICALLY AND MATHEMATICALLY IMPOSSIBLE, AS PROVEN IN THE 2004 NEP, SO TOO MUST THE FINAL REPORTED VOTE BE IMPOSSIBLE.


The FINAL National Exit Poll is a misnomer; it's not really a poll. It's just a mechanism for MATCHING THE EXIT POLL TO THE FINAL REPORTED VOTE COUNT. This matching policy relies on FAITH that TRUE voter preference is reflected in the RECORDED VOTE - and that there was ZERO fraud.

On the other hand, the FINAL PRELIMINARY NATIONAL EXIT POLL does NOT match to the final recorded vote count. It uses the PRISTINE data collected in the poll.

If you have FAITH that our elections are FRAUD-FREE, then you can believe the CNN projection. If, on the other hand, you believe that our elections are NOT FRAUD-FREE, then you would be justified in NOT accepting network projections at face value and would be justified in assuming that the pre-election polls and uncontaminated preliminary exit polls are MUCH CLOSER TO THE TRUE VOTE.

______________________________________________________________________

http://www.cnn.com/ELECTION/2006/pages/results/mis...

PROJECTION EXPLAINER

How does CNN make election projections?

(CNN) -- To project an election, CNN and its election experts use scientific statistical procedures to make estimates of the final vote count in each race. CNN will broadcast a projected winner only after an extensive review of data from a number of sources.

CNN editorial policy strictly prohibits reporting winners or characterizing the outcome of a statewide contest in any state before all the polls are scheduled to close in every precinct in that state.

CNN will receive information from the following sources:
The Associated Press: The Associated Press will provide vote totals for each race. The AP will be gathering numbers via stringers based in each county or other jurisdiction where votes are tabulated.

Edison Media Research: To assist CNN in collecting and evaluating this information, CNN, the other television networks and the Associated Press have employed Edison Media Research (EMR). In previous elections, this firm has assisted CNN in projecting winners in state and national races. EMR will conduct exit polls, which ask voters their opinion on a variety of relevant issues, determine how they voted, and ask a number of demographic questions to allow analysis of voting patterns by group. Using exit poll results, scientifically selected representative precincts, vote results from the AP, and a number of sophisticated analysis techniques, EMR also recommends projections of a winner for each race it covers.

Collecting data

The process of projecting races begins by creating a sample of precincts. The precincts are selected by random chance, like a lottery, and every precinct in the state has an equal chance to be in the sample. They are not bellwether precincts or "key" precincts. Each one does not mirror the vote in a state but the sample collectively does.

The first indication of the vote comes from the exit polls conducted by EMR. On Election Day, EMR interviewers stand outside of precincts in a given state. They count the people coming out after they have voted and are instructed to interview every third person or every fifth person, for example, throughout the voting day. The rate of selection depends on the number of voters expected at the polling place that day. They do this from the time the polling place opens until shortly before it closes.

The interviewers give each selected voter a questionnaire, which takes only a minute or two to complete. It asks about issues that are important, and background characteristics of the voter, and it also asks for whom they voted in the most important races. During the day, the interviewer phones the information from the questionnaires to a computer center.

Next, vote totals come in from many of the same sample precincts as the exit polls after the voting has finished in those precincts. These are actual votes that are counted after the polls have closed. Election officials post the results so anyone at the precinct can know them.

The third set of vote returns come from the vote tallies done by local officials. The local figures become more complete as more precincts report vote returns. The county or township vote is put into statistical models, and EMR makes estimates and projections using those models. In addition, CNN will be monitoring the Web sites of the Secretaries of State offices to help analyze the outcome of early voting and absentee voting.

Projections

The projections for CNN will be made from the CNN Election Analysis Center at the Time Warner Center. An independent team of political analysts and statistical experts will analyze the data that will lead to the final decisions on projections.

CNN will decide when and how to make a projection for a race depending on how close the race is. In races that do not appear to be very close, projections may be made at poll closing time based entirely on exit poll results, which are the only information available when the polls close about how people voted. The races projected from exit polls alone are races with comfortable margins between the top two candidates. Projections from exit polls also take into account the consistency between exit poll results and pre-election polls. In the case of close races, CNN will wait for actual votes to be tabulated and reported. EMR may make projection recommendations to its clients, but CNN will make all final calls for broadcast.

Shortly after poll closing time, CNN may make projections using models that combine exit polls and actual votes. This happens in closer races. For extremely close races, CNN will rely on actual votes collected at the local level. These are the races that cannot be projected when the polls close from exit polls or even from actual votes collected at the sample precincts mentioned earlier. The projection for these races will be based on a statistical model that uses the actual votes. If it is too close for this model to provide a reliable projection, CNN will wait for election officials to tally all or almost all the entire vote.

What a projection call means

CNN analysts will make all projections for CNN broadcasts. When CNN's analysts project a winner in a race, whether it is based upon data from EMR or from the CNN computations, it means that when all the votes are counted, CNN projects that the candidate will win the race. A projection is as close to statistical certainty as possible, but that does not mean that a mistake cannot happen; rather, it means that every precaution has been taken to see that a mistake is not made. CNN will not "declare" someone a winner because that declaration is up to election officials. CNN will make projections based on our best estimate of how CNN expects an election to turn out.

When a lot of vote returns have been tallied, a race may be referred as "too close to call" by CNN anchors and analysts. "Too close to call" means the final result will be very close and that the CNN analysts may not know who won. For the races that are the closest, the CNN Election Analysis Desk will keep CNN viewers up to date on the state by state rules regarding automatic recounts and will report immediately on any official candidate challenge regarding the results or voting irregularities.





 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 07:56 PM
Response to Reply #114
129. the good, the bad, and the ugly
Part of this is fine: it's true that weighting the tabulations to the vote count assumes that the vote count is accurate, or at least more accurate than the unweighted results would be. There is still no sign that TIA understands why the tabulations are weighted to the vote count, despite the many times that it has been explained, but whatever.

Some of it is pathologically wrong. Blind faith in polling is not redemptive.

"The only way to project the TRUE vote and to eliminate FRAUD BIAS, is to use STATISTICAL MODELS designed to analyze ACTUAL RAW Exit Poll precinct data." Well, that sounds just dandy, if TIA can guarantee that the exit poll data are unbiased. But he can't, no matter how many times he refers to them as "pristine."

"IF FINAL EXIT POLL WEIGHTINGS CAN BE PROVEN TO BE PHYSICALLY AND MATHEMATICALLY IMPOSSIBLE, AS PROVEN IN THE 2004 NEP...." Well, no. TIA should actually have to win that argument, and he can't. His coolest argument is the one based on past presidential vote, but it assumes that people accurately report their past presidential votes -- and we know that they don't. (An illustration: in the 1992 exit poll, even the unweighted results have 52.5% of respondents saying that they voted for Bush I in 1988. Extrapolating to the population, that would be about 54.8 million voters who had voted for Bush I, although Bush I received only 48.9 million votes in 1988.)

Hey, here's a wild idea: how about instead of arguing about how to "project the TRUE vote," we actually
COUNT THE DAMN VOTES?
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 10:19 AM
Response to Reply #129
134. TIA RESPONDS
TIA
I will try once again to get you to understand what I have always said regarding the past vote question. I say this because either a) you have misrepresented the facts or b) you have not understood the logic. So let's return to the debate.

OTOH
Bush voters COULD have been less likely to respond to exit pollsters (rBr).

TIA
But that theory conflicts with the Final NEP 43-37 weightings, which indicate that Bush voters were more likely to respond.

OTOH
Well, then "false recall" COULD explain the 43-37 weightings.
Gore voters COULD HAVE lied or forgot when they said they voted for Bush in 2000. Voters like to jump on the winning bandwagon, even if it's four years old.

TIA
But Gore was the winner in 2000, NOT Bush. Gore got 540,000 more votes and we know that he had more votes in Florida - but Scotus voted 5-4 for Bush.

OTOH
But the Final Exit Poll confirmed that Bush was the winner.

TIA
Really? Let's look at the numbers: the Final NEP said that Bush 2000 voters were 43% (52.57mm) of the 122.3mm who voted in 2004. But Bush only had 50.45mm votes in 2000. About 1.75mm of them died. So the maximum Bush weighting was 39.8% (48.7/122.3). That is proof that the 43/37 weightings are mathematically impossible.

OTOH
I am not arguing with your math, just your assumptions.

TIA
You mean the assumption that 3.5% of Bush 2000 voters died? That statistic was based on the annual 0.87% death rate in 2000. Bush had 50.45mm votes in 2000. That's a fact, not an assumption.
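
The arithmetic behind this exchange can be laid out in a few lines of Python (a sketch using only the figures quoted above: 50.45mm Bush 2000 votes, a 0.87% annual death rate, 122.3mm total 2004 votes, and the Final NEP's 43% weighting):

```python
bush_2000 = 50.45e6                              # Bush's recorded 2000 vote
deaths = bush_2000 * (1 - (1 - 0.0087) ** 4)     # ~1.73 million deaths over four years (the post quotes ~1.75mm)
survivors = bush_2000 - deaths
total_2004 = 122.3e6

print(survivors / 1e6)              # ~48.7 million surviving Bush 2000 voters, at most
print(survivors / total_2004)       # ~0.398 -- the maximum feasible weighting
print(0.43 * total_2004 / 1e6)      # ~52.6 million implied by the 43% weighting
```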

OTOH
But this is the 2004 FINAL EXIT POLL. It's based on sophisticated scientific analysis. Gore voters COULD have lied or COULD have forgot that they voted for him. After all, it was four years ago. That's an eternity in politics.

TIA
It doesn't make a difference WHAT Gore (or Bush) voters SAID in 2004 about who they voted for in 2000. It makes NO difference whether they told the truth, lied or forgot. The only relevant question is: HOW DID THEY VOTE IN 2004? The historical fact of who they voted for in 2000 is irrelevant.

OTOH
That's true. But the Final Exit Poll is ALWAYS matched to the reported vote.
It's standard operating procedure. It's not pseudo-science, like your analysis. Who are you to argue with Edison-Mitofsky? They have 30 years of exit polling experience.

TIA
Then why do you say that their Exit Polls are usually wrong?

OTOH
I never said that. I said that the Final 2004 National Exit Poll (13660 respondents), which WAS matched to the vote, is accurate. The earlier Exit Poll (12:22am, 13047 respondents), which was NOT matched to the vote, is not accurate.

TIA
But if the votes are miscounted, and the Final NEP is matched to miscounted votes, then the Final must be bogus.

OTOH
Yes, it's possible that the recorded vote could be corrupt. But I have proved time and again that your analysis is wrong, because your assumptions are wrong. You assume that polls are pure random samples. When the book is written on how Bush gained 14 million new voters from 2000 and won in 2004 by 3 million votes, all your mathematical gyrations will be exposed as pure hype.

TIA
And who will write that book? Farhad Manjoo, with your help?

Do you still believe that exit poll non-responders are mostly Bush/Republican voters? Do you still contend that the 43/37 "how voted in 2000" weightings were due to Gore 2000 voters lying or forgetting that they voted for him? Or that the exit pollsters sought out Kerry voters? Or that the exit polling stations were far from the voting machines? Or that the sampled precincts were not representative of the voting universe? Or that Mitofsky's own data did not contradict his conclusion that Gore voters were over-sampled, when the data showed the opposite was true? Or that Febble's Fancy Function is the end-all elegant proof of why the early 2004 exit polls do not support the stolen election hypothesis? Or that Farhad Manjoo had it right when he wrote those two articles which a) demeaned my work and extolled Febble in June 2005 and b) thrashed the work of RFK, Jr. in his fully discredited June 2006 hit piece in which you were the primary source?
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 10:26 AM
Response to Reply #134
135. TIA, stop putting words in my mouth
Anyone can pretend to win an argument when he argues both sides.

I'll be back to smack you around some more later.
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 12:45 PM
Response to Reply #135
137. Actually OTOH
He's got you cold. It being perfectly acceptable to quote folks, and being that we find his quoting of you down pat, he obviously has won. Again. No surprise here.

Ya know, the kind of math that adds 1 and 1 and comes up with 2, is the kind of math we should be using, don't you think? Instead the math you seem to be posting here is some kind of new math where yall add 1 and 1 and still come up with nothing. Intellectual integrity? Where? Over there, with TIA, all the way!
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 03:53 PM
Response to Reply #137
141. "we find his quoting of you down pat"??
The hell it is. As far as I can tell from Google, some of those things have never been said before by anyone, much less by me. Hey, whatever.
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 12:22 PM
Response to Reply #141
148. Ha
You let Google do your thinking for you? Hell, even I can remember you saying those things, probably not in those exact words but the meanings were congruent.
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 06:11 PM
Response to Reply #148
160. you remember me saying all sorts of things I never did
Go wild. Anyone who cares, can check; anyone who doesn't, won't. Say I advocate sacrificing babies to the Diebold gods. Why not?
 
Kelvin Mace Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Dec-04-06 08:40 AM
Response to Reply #148
180. Well, if this is true
post the words. Links, you know, BACK UP your assertion.
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 05:17 PM
Response to Reply #134
144. creatures from the black lagoon
I'm actually a fairly patient person, but having to hack my way through a Socratic "dialogue" between a tombstone and some phantom other-OTOH is not really my thing.

"...But that theory conflicts with the Final NEP 43-37 weightings, which indicate that Bush voters were more likely to respond."

Nonsense. The weighted distribution of recalled 2000 vote doesn't indicate anything about whether Bush (2004) voters were more likely to respond, any more than the 46% male/54% female weighting indicates that Kerry voters were more likely to respond because most women voted for Kerry.

"...That is proof that the 43/37 weightings are mathematically impossible."

No, it is proof (given reasonable assumptions) that less than 43% of 2004 voters voted for Bush in 2000.

"It doesn't make a difference WHAT Gore (or Bush) voters SAID in 2004 about who they voted for in 2000."

Look, this is just painfully obtuse. If it doesn't make a difference what voters said in 2004 about who they voted for in 2000, then why do you keep bringing it up?

You really seem not to understand the distinction between fact and poll answers. If the exit poll tabulation also showed only 2% of voters admitting that they had ever run a red light, would you cite that as evidence of election fraud as well?

As far as I can tell, the rest of this is the same mistakes over and over again, with some hand-waving and plenty of misrepresentation thrown in, plus a dash of ad hominem by association (oooooh! Farhad Manjoo!).
 
bleever Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 01:46 PM
Response to Original message
2. I hope those seeking to advocate contrary positions,
i.e. that there was no appreciable fraud, will do as much work to justify their assumptions and the validity of their calculations.

An especially well-written TIA post. I'm glad he's feeling well and working hard.

:)
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:21 PM
Response to Reply #2
9. Well, let me clarify
that I do not take a "contrary position ie. that there was no appreciable fraud". The robocalls and pushpolls alone were fraudulent, and there were clear miscounts in Florida.

But I think the case is best made by arguments that are not based on flawed assumptions, and TIA's are.

Having said that, I too am glad to see that TIA is back to his old self! I just wish he'd check out the assumptions that underlie his probability calcs.
 
bleever Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:39 PM
Response to Reply #9
14. If we temper the effect of the law of large numbers with caution
about within-poll error, and use a higher margin of error closer to that of the individual polls, his calculations still show that mathematically the probability of fraud in the results is still very, very high. Given that he provided different probabilities based on other MOEs, the OP already takes into account varying assumptions, and this seems to take your point into account.
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:50 PM
Response to Reply #14
15. Well, that may be so
in which case why did he use the lower figure, which is clearly unsupportable, and say it was correct?

But let's accept that he doesn't think it was correct, and also accept that the polls were consistently more Democratic than the count, and consider Skinner's point here:

http://www.democraticunderground.com/discuss/duboard.php?az=show_mesg&forum=364&topic_id=2775205&mesg_id=2781808

I just see no point in advancing a bad argument for a good case, when the case is good without the bad argument.
 
bleever Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:14 PM
Response to Reply #15
20. I re-read that post
and respectfully disagree with Skinner that TIA starts with "completely false" assumptions in the analysis because a) people were asked which candidate they supported (albeit by party and not by name) and b) the actual "in the polling booth" experience of facing a list of real names can't be assumed to break disproportionately for either party (although in light of a national anti-incumbency mood, and the preponderance of scandals being Republican, we might have good reason to think that if anything, this would favor the Democratic candidate).

Hence I disagree that it's a "bad argument" for a good case, but I certainly respect anyone's right to argue for the odds, based on this analysis, being one out of thousands rather than billions.

 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:57 PM
Response to Reply #20
25. One other problem with the criticism regarding names vs parties
is that parties are clearly listed on the ballots right near the names of the candidates. The scenario of someone changing for whom they would vote just because they saw a name of someone they liked more in the other party IS a remote possibility, but the probability of that occurrence is rather small indeed. I seriously doubt that the frequency of that occurring would be enough to sway TIA's results by enough to make a substantial difference. I would be willing to look at any scholarly study and statistical analysis of that hypothesis.
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:02 PM
Response to Reply #20
27. Well, if you want to calculate the odds
you can't use either the pooled number of voters, or the MoE
for one poll.  What we need is a meta-analysis. We know, from
the between poll variance, that there was non-sampling error
in those polls.  Let's assume, for the purposes of the
calculation, that that error was of a type that would have had
a net value of zero (cancelled out) and a normal distribution.
As the standard error in each poll was similar, we can
simply take the margin between the candidates as a measure of
effect size.

Here are the margins of the ten polls, using TIA's figures:

	Margin	dev	z	p	1/p
CNN	11.0%	-0.016	-0.29 	 0.385 	 1 in 3 
NBC	15.0%	 0.024	 0.44 	 0.669 	 1 in 1 
CBS	19.0%	 0.064	 1.17 	 0.879 	 1 in 1
Nwk	16.0%	 0.034	 0.62 	 0.732 	 1 in 1
TIME	15.0%	 0.024	 0.44 	 0.669 	 1 in 1
Pew	 4.0%	-0.086	-1.57 	 0.058 	 1 in 17 
ABC	 6.0%	-0.066	-1.20 	 0.114 	 1 in 9 
USA	 7.0%	-0.056	-1.02 	 0.153 	 1 in 7 
CNN	20.0%	 0.074	 1.35 	 0.911 	 1 in 1 
Fox	13.0%	 0.004	 0.07 	 0.529 	 1 in 2 

mean	12.6%				
st.dev	5.5%				

count   5.10%	-0.075	-1.37 	 0.086 	 1 in 12 


The mean margin is 12.6 points, and the standard deviation of
the margins is 5.5.  The third column gives the deviation of
each poll from the mean.  The fourth column is the z score of
that deviation (i.e. how many standard deviations from the
mean that poll was). The fifth column tells you the
probability of that margin occurring by chance in a random
sample of polls, given the between poll variance.  The last
column is just the inverse of the probability.

So the question is: how far was the counted result from the
mean of polls?  Well, it was 1.37 standard deviations less
than the mean.  In other words, if the error in polls was
randomly distributed about the true mean, then you'd expect to
see that kind of discrepancy 1 in 12 times, using polls drawn
at random.

So my estimate is 1 in 12.
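
Febble's 1-in-12 figure can be reproduced directly from the margins in her table (a sketch; the 5.1-point counted margin and the one-tailed normal probability are as in her columns):

```python
import statistics as st
from scipy.stats import norm

margins = [11, 15, 19, 16, 15, 4, 6, 7, 20, 13]   # final 10 generic-poll margins (points)
counted = 5.1                                      # counted margin from the table

mean, sd = st.mean(margins), st.stdev(margins)     # 12.6 and ~5.5
z = (counted - mean) / sd                          # ~ -1.37
print(z, norm.cdf(z), round(1 / norm.cdf(z)))      # p ~ 0.086, i.e. roughly 1 in 12
```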

Now, obviously we do not draw polls at random.  We have a
finite population of polls.  And the mean non-sampling error
in those polls is clearly substantial.  In other words, some
of the polls were BIASED. The problem is, we do not know which
of the polls were biased, nor by how much.  The least biased
could have been Pew, in which case the Democrats did slightly
better than the poll.  Or it could have been CNN, in which
case the Democrats did considerably worse than the poll.

But we cannot tell from these data which polls had how much
bias.  What we can tell is that some of them had some.  TIA's
own data is evidence of the kind of non-sampling error in
polls he repeatedly assumes does not exist.

But this tells us absolutely nothing about fraud in 2006. 
Which is why I wish people would stop dazzling themselves with
improbable probability estimates and concentrate on collating
the copious amounts of data that is flooding in about voter
suppression, undervotes, "glitches" and evidence
that may amount to outright fraud.

I'd like to see prosecutions for those robocalls and push
polls for a start.  
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 02:38 PM
Response to Reply #27
75. The meta-analysis is one possibility, now...
Wouldn't it be possible, IF the polls' precinct locations, etc. were already known, to estimate the "bias" in the sample? In other words, it would be nice to also compare some specific questions other than "are you democrat or republican", effect sizes, etc...but you know that, so this is fair but may lack power and be conservative - we don't know. The assumption of binomial normality is also a pretty big jump...but it's the only choice. Instead of means, a James-Stein estimator is useful....oh, well, it's not useful to spend too much time on this because...

I AGREE, we need to concentrate on the undervotes and other issues already available that show clear problems!

I think exit polls would be helpful to the process if they targeted places where the problems usually occur, and also asked questions that were related to the issues, such as "Did you see the vote for ______ on the screen?" If 20% didn't see that race and so they didn't vote, that's a problem! If only 2% report not voting and the undervote is 15% and the poll shows 13% fewer votes than recorded, that's a bigger problem!

I simply don't see those questions asked by ANYONE in ANY polls...even though it's common to report on the watchdog calls and websites. In 2000, I would not have asked those questions. In 2006, it's criminal NOT to ask such questions!

Why not? If there is no manipulation, great! If there is...the lawyers would love the poll evidence!

I understand that polls would also need to have more representatives in some places to get stable samples. :toast:
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 11:47 AM
Response to Reply #75
113. You do realise, don't you
that these data are from pre-election polls?

They aren't precinct samples. Most, if not all, will be random digit dialing telephone polls.
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 09:08 PM
Response to Reply #27
92. TIA
TIA
_______________________________________

Febble, a 1 in 12 probability?
You can't be serious.

Here are the individual probabilities for the 10 polls.
The probability is a function of the discrepancy in vote share and the MoE.
The vote shares for the actual votes and each poll were converted to their
2-party equivalent.

For example, CNN: the deviation from the Democratic 2-party poll share (55.8%) to the 52.5% vote was 3.3%. The standard deviation is 1.75%, which is the
MoE/1.96. The MoE is 3.43% = 1.96 * 1.75%.

The probability of the 3.3% deviation is 1 in 33
Prob = NORMDIST(0.5251,0.558,0.0175,TRUE)

Actual .. 2-pty ...... 10-poll average
Dem ..... 52.51% ..... 57.0%
Rep ..... 47.49% ..... 43.0%

Diff = 2-party vote/poll discrepancy

.......Poll..........2-party Poll Diff ........... Deviation
...... Dem . Rep ... Dem .. Rep ... Dem ..... StDev ............. Prob 1 in
CNN .. 53 . 42 .. 55.8%. 44.2%. -3.3% ... 1.75% . 3.02E-02 ....... 33
NBC .. 52 . 37 .. 58.4%. 41.6%. -5.9% ... 1.57% . 8.33E-05 .. 12,008
CBS .. 52 . 33 .. 61.2%. 38.8%. -8.7% ... 1.99% . 6.85E-06 . 145,889
NWK .. 54 . 36 .. 60.0%. 40.0%. -7.5% ... 1.71% . 6.02E-06. 166,237
TIME . 55 . 40 .. 57.9%. 42.1%. -5.4% ... 1.89% . 2.24E-03 ...... 446

Pew .. 47 . 43 .. 52.2%. 47.8%.. 0.3% ... 1.18% . 5.96E-01 ......... 2
ABC .. 51 . 45 .. 53.1%. 46.9%. -0.6% ... 1.44% . 3.35E-01 ......... 3
USA .. 51 . 44 .. 53.7%. 46.3%. -1.2% ... 1.57% . 2.27E-01 ......... 4
CNN .. 58 . 38 .. 60.4%. 39.6%. -7.9% ... 1.94% . 2.28E-05 .. 43,900
FOX .. 49 . 36 .. 57.6%. 42.4%. -5.1% ... 1.65% . 9.08E-04 .... 1,102
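
Each row can be checked the same way; here is the first CNN row as a worked example (two-party conversion of the poll, then the normal CDF with the standard deviation shown in the table):

```python
from scipy.stats import norm

dem_poll, gop_poll = 53, 42                    # CNN generic poll result
dem_2pty = dem_poll / (dem_poll + gop_poll)    # ~0.558 two-party Democratic poll share
actual_2pty = 0.5251                           # reported two-party Democratic share
sd = 0.0175                                    # MoE 3.43% / 1.96, per the table

p = norm.cdf(actual_2pty, loc=dem_2pty, scale=sd)
print(p, round(1 / p))                         # ~3.0E-02, about 1 in 33
```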


HERE'S A COMPREHENSIVE ELECTION 2004 SITE:
POLLING DATA, ANALYSIS, DISCUSSION
and...
THE EXCEL INTERACTIVE ELECTION MODEL
http://www.truthisall.net/

Downloads in a minute (4mb)
Easy to use (3 inputs)
Press F9 to run 200 simulations
Pre-election/exit polls
(51 State & 18 National)

A challenge to all those who still believe Bush won:
Use the National Exit Poll
"How Voted in 2000" demographic
("NatExit" sheet) to come up
with just ONE plausible Bush win scenario.

Note the feasibility constraint:
The maximum ratio of Bush 2000 voters to the total 2004 vote is 39.8%
(48.7mm/122.3mm)

Post the scenario on the Election Forum at ProgressiveIndependent.com and/or DemocraticUnderground.com



View the original 11/1/04 election model forecast of Kerry winning
51.63-51.80% of the 2-party vote:
http://www.geocities.com/electionmodel/
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 12:39 PM
Response to Reply #92
115. Sorry, TIA, but you are wrong
Edited on Sun Nov-26-06 12:41 PM by Febble
You state:

The probability is a function of the discrepancy in vote share and the MoE.


This statement is correct only when amended as follows:

The probability that the discrepancy between the vote share and each poll is due to sampling error alone is a function of the discrepancy in vote share and the MoE.


However, an additional source of error is non-sampling error in the polls. Its existence can easily be demonstrated from your own data, as I just did. But if you don't like my calculations, you can use your own formula to estimate the probability of the deviation of each poll from each other poll if the only error in the polls were sampling error.

I think you will find that the probability is extremely small.
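
One way to put a number on this point (my framing, not necessarily Febble's own calculation) is a Cochran-style heterogeneity test on the ten 2-party Dem shares and sampling standard deviations from TIA's table: under the hypothesis that sampling error is the only source of between-poll variation, the statistic Q follows a chi-square distribution with k-1 degrees of freedom.

import numpy as np
from scipy.stats import chi2

# 2-party Dem shares and sampling SDs (MoE/1.96) from TIA's 10-poll table
shares = np.array([0.558, 0.584, 0.612, 0.600, 0.579,
                   0.522, 0.531, 0.537, 0.604, 0.576])
sds    = np.array([0.0175, 0.0157, 0.0199, 0.0171, 0.0189,
                   0.0118, 0.0144, 0.0157, 0.0194, 0.0165])

weights = 1.0 / sds**2
pooled  = np.sum(weights * shares) / np.sum(weights)   # precision-weighted mean
Q       = np.sum(weights * (shares - pooled)**2)       # heterogeneity statistic
p_value = chi2.sf(Q, df=len(shares) - 1)
print(f"Q = {Q:.1f}, p = {p_value:.2e}")

A tiny p-value means the polls disagree with one another by more than sampling error alone can explain, which is the non-sampling error Febble is pointing to.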

edited for clarity
Printer Friendly | Permalink |  | Top
 
philb Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 09:42 AM
Response to Reply #27
132. there were other specific cases of malfeasance documented as well
and serious audits of the machines and processes should be carried out in the many
races/locations with touch screen switching, switching to blank, high undervotes or more votes than voters.
A serious audit would likely determine the cause of the "irregularities," and there have been enough of them to warrant concern. Why aren't people insisting on finding out the cause of the consistent (non-random) switching, which surely could be done?

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 01:35 PM
Response to Reply #132
140. Again, I agree
my point, as always, is that I see no purpose in trying to argue for audits on the basis of an easily refuted inference from polls.

For the record: I AM insisting on finding out the cause of what appears to be consistent switching in one direction, as well as for the undervotes that appear to have selectively disenfranchised Democrats in Florida.

I am also insisting (if insisting does any good) that the perpetrators of the robo-calls and push-polls are investigated and brought to justice. Again, I see no point in contaminating the argument for such an investigation with fallacious inferences from polls. Your elections are quite astonishingly corrupt. You do not need bad arguments to demonstrate it.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:15 PM
Response to Reply #20
30. well, no
Given Americans' long-standing propensity to 'hate Congress but love their member of Congress,' there is at least one good reason to expect that the party that controls Congress would do better in the voting booth than in generic polls.

There is also the consideration -- often repeated, consistently ignored by TIA as far as I can tell -- that polls in individual districts yielded seat projections close to the observed results (Democrats with a 30-odd seat majority).

(TIA says: ) Some made the claim that Generic polls are not useful for projecting votes. If that were so, WHY do a Generic poll at all? Why did polling blogs cover them at all.

That's a pathetic argument, frankly, given that TIA disregards pretty much everything that the polling blogs actually say about the generic ballot results. Folks might want to check out Mark Blumenthal's recent review. (They could go on to check out what various observers were saying before the election.) Note that Gallup and Pew have a pretty strong predictive track record. So, when generic ballot results are all over the yard, if anything most observers are going to pay more attention to the Gallup and Pew polls (which also have the longest field period and the largest samples) than the others.

I respect your right to argue for the odds being one out of thousands (never mind one out of billions), but I don't think you're going to convince many observers. Skinner is pretty much on target. I don't know whether TIA's assumptions are "completely false," but they are sufficiently false to make his probability calculations completely meaningless. That will do.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:50 PM
Response to Reply #30
80. TIA: Questions for Feeble and OTOH
TIA: Questions for Febble and OTOH

1. How accurate were your pre-election House and Senate projections?
2. What methodology did you use?
3. If you did not develop a projection model, what methodology would you have used?
4. If you did not develop a model, what is your best estimate NOW as to the TRUE Generic vote margin?
5. Are you familiar with the method of combining polls (Poll-of-Polls)?
6. If you are familiar with the method, what is the theoretical basis for it? If you do not believe it is a valid method, tell us why not.
7. What is the MoE formula which you subscribe to?
8. Why do pre-election and exit pollsters even bother to include a MoE if there is non-response bias?
9. Do you still believe that Bush won fairly in 2004?
10. Do you believe that electronic fraud was not a major component of the Bush “mandate”?
11. If there was fraud in 2004 or 2006, how could it have been detected in the Final National Exit Poll, which assumed a perfectly accurate vote count?
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:59 PM
Response to Reply #80
83. TIA: And one more question for Feeble and OTOH:
Add one more question for Febble and OTOH
From: TruthIsAll
Date: Nov 25th 2006
12. What does this graph tell you?



Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:03 PM
Response to Reply #80
85. mom cat, it is not that hard to type "Febble"
Edited on Sat Nov-25-06 06:11 PM by OnTheOtherHand
Febble. Febble. Febble. Febble. Febble.

TIA, who are you trying to fool? You know damn well you never listen to our answers. Case in point:

"9. Do you still believe that Bush won fairly in 2004?"

That is just damned dishonest. (Edit to add: Anyone who reads the forum knows we have never said that "Bush won fairly," and have often said otherwise.)

EDIT TO ADD: and another:

"11. If there was fraud in 2004 or 2006, how could it have been detected in the Final National Exit Poll, which assumed a perfectly accurate vote count?"

C'mon. TIA himself claims to detect fraud in the final national exit poll. This is just silly, and we've been over it dozens of times.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:49 PM
Response to Reply #9
79. TIA: Questions for Feeble and OTOH
TIA: Questions for Febble and OTOH

1. How accurate were your pre-election House and Senate projections?
2. What methodology did you use?
3. If you did not develop a projection model, what methodology would you have used?
4. If you did not develop a model, what is your best estimate NOW as to the TRUE Generic vote margin?
5. Are you familiar with the method of combining polls (Poll-of-Polls)?
6. If you are familiar with the method, what is the theoretical basis for it? If you do not believe it is a valid method, tell us why not.
7. What is the MoE formula which you subscribe to?
8. Why do pre-election and exit pollsters even bother to include a MoE if there is non-response bias?
9. Do you still believe that Bush won fairly in 2004?
10. Do you believe that electronic fraud was not a major component of the Bush “mandate”?
11. If there was fraud in 2004 or 2006, how could it have been detected in the Final National Exit Poll, which assumed a perfectly accurate vote count?
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:56 PM
Response to Reply #79
82. TIA: And one more question for Feeble and OTOH
Edited on Sat Nov-25-06 05:57 PM by mom cat
Add one more question for Febble and OTOH
From: TruthIsAll
Date: Nov 25th 2006
12. What does this graph tell you?



Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:07 PM
Response to Reply #82
86. I will answer this one
It tells me that TIA couldn't figure out how to incorporate dates in his "trend" analysis.

TIA, go read Charles Franklin. Conceivably you might learn something.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:10 PM
Response to Reply #82
117. What he said n/t
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:09 PM
Response to Reply #79
116. OK


  1. Didn't make any. I followed closely projections made by Larry Sabato, and the polls and discussion on Pollster.com, particularly the fascinating commentary by Prof. Charles Franklin:

    I am a professor of political science at the University of Wisconsin, where I teach statistical analysis of polls, public opinion and election results.


    This is because I recognise that interpreting polling data requires far greater expertise than my own in "statistical analysis of polls, public opinion and election results".

  2. N/A

  3. I would have looked closely at polling results for specific House and Senate races.

  4. Because of my mistrust of electronic voting, I do not have a best estimate. This is the problem.

  5. I am familiar with meta-analytical methods.

  6. I have no idea what you are referring to, but if it is your own, then I have told you in other posts (e.g. post #27) why there is a problem with it.

  7. As a rough and ready formula, the one you use is reasonable, although it will be inaccurate where p is very much larger than q, because the distribution is seriously asymmetrical at these extremes. However, I am aware, as you do not seem to be, despite the fact that you have been told a gazillion times, that sampling error is not the only error in polls. The MoE computed using this formula gives you the Margin of Error due to sampling error alone. It tells you nothing about non-sampling error. However, a meta-analysis of comparable polls will give you a clue to the kind of magnitude of non-sampling error that polls include. (A short sketch of this formula appears at the end of this post.)

  8. From Warren Mitofsky:

    I want to say a few words about reporting sampling error. A number of people who have spoken here have talked of not reporting sampling error because it was confusing all those dear mindless souls who listen to our results. They were concerned we would make people think that sampling error was the only error in the survey. I guess I am not too sympathetic with that point of view.


    In other words: Mitofsky, like other pollsters, assumed that no "dear...soul" will be so "mindless" as to be confused into thinking "that sampling error was the only error in the survey". It seems Mitofsky was wrong.

  9. No.

  10. Yes.

  11. It would have been difficult to detect in the exit poll, because of the likelihood of bias in the poll, and any analysis aimed at detecting fraud (obviously) would need to be done through a comparison between the unadjusted estimates and the final count. The adjusted cross-tabulations would clearly be useless for the purpose. A key piece of evidence would have been a correlation between Bush's success and the magnitude of the discrepancy at precinct level. In other words, if vote-switching had been on anything like the scale inferred by TIA, I would have expected a marked correlation between "swing" to Bush, relative to 2000, and "redshift" in the poll. I did precisely this analysis, for precisely this reason, and found absolutely no correlation at all.

    http://inside.bard.edu/~lindeman/slides.html
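
Following up on answer 7: a minimal sketch, assuming the formula under discussion is the usual MoE = 1.96 * sqrt(p*(1-p)/n). It prints the formula's symmetric interval next to the exact binomial interval so the reader can see how the two compare as p moves away from 0.5. Either way, this is only the margin of error due to sampling error.

import math
from scipy.stats import binom

n = 1000
for p in (0.50, 0.80, 0.95):
    moe = 1.96 * math.sqrt(p * (1 - p) / n)   # sampling-error-only MoE
    lo  = binom.ppf(0.025, n, p) / n          # exact 2.5th percentile
    hi  = binom.ppf(0.975, n, p) / n          # exact 97.5th percentile
    print(f"p = {p:.2f}  formula: {p - moe:.3f} to {p + moe:.3f}   exact binomial: {lo:.3f} to {hi:.3f}")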


Printer Friendly | Permalink |  | Top
 
philb Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 09:50 AM
Response to Reply #116
133. A larger EP effort and larger audit effort should be carried out and would offer more
Edited on Tue Nov-28-06 09:52 AM by philb
firm evidence on some of the issues being discussed here, and support more transparent and fair elections. There needs to be some way to do this in a non-partisan manner or with sufficient checks and balances, so audits are not controlled by the party in power.
Though I also think Exit Polls are useful in this regard if well done.


But there are some obvious changes in the elections process needed in most areas that would make the process more transparent and fair again. Currently we have such an untransparent and unreliable system that there is no reason to feel secure in election outcomes in most areas. This can be improved greatly and needs to be.

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 12:21 PM
Response to Reply #133
136. I agree, although
I still think exit polls are potentially misleading.

I don't know if you've seen this:

http://electionintegrity.org/reports/exit_poll_first_report.shtml

I think Steve Freeman makes a very good observation here:

Many people were thrilled that we were doing what we were doing, and expressed disgust or contempt at the idea of electronic voting and suspicion of the election process. These people invariably agreed to participate; and we know from other polls that Democrats are, in general, far more concerned about e-voting and election fraud than Republicans.


I think he is right, and my concern about exit polls as monitoring instruments is simply that the sample may tend to be biased in the direction of those with concerns about the count, and thus be self-fulfilling.

Which, of course, is why I am so keen on audits, but I take your point that they should not be controlled by the party in power!

Cheers

Lizzie

Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 01:03 PM
Response to Reply #136
138. Wrong assumptions, again, Febble
Because the 2004 final polls showed that bush voters were overpolled. Heck, a bunch of them it seems came back from the dead just to vote for bush again.

So the question in everyone's mind is: Why did the final exit-poll report include all those dead bush voters?

Since we will never know, we just have to assume that the exit-polls were slanted in bush's favor, and not assume, as you do, that Dems were overpolled.

Gawd, doesn't it feel good to settle that once and for all?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 01:30 PM
Response to Reply #138
139. Do you actually read my posts?
Or do you just hurl insults at random?

It's kinda hard to tell.

But the likely answer to the "question in everyone's mind" is here.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 12:27 PM
Response to Reply #139
149. What insult?
Do you mean this:....and not assume, as you do, that Dems were overpolled."

Are you now claiming that you don't assume that Dems were overpolled?

How can you even begin to think that is an insult? It is well known that your theory (unproven) is that Dems are always overpolled.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 12:35 PM
Response to Reply #149
150. No, I do not
assume the Dems were overpolled. I found evidence that Dems were overpolled.

And I found the last line of your post insulting. If it wasn't meant to insult, then I have no idea what you meant.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 12:44 PM
Response to Reply #150
151. You found evidence?
And so is that proof? I don't think so, so don't go around saying you have proof. All you are doing is assuming.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:05 PM
Response to Reply #151
152. Proof is for math and alcohol
Edited on Thu Nov-30-06 02:05 PM by Febble
science deals with evidence.

And evidence is not the same thing as an assumption.

Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:13 PM
Response to Reply #152
153. Well
We have tons of evidence that millions of votes have been stolen, so it is no longer an assumption?

Looking forward to your kind of intellectual clarity on this matter. 'Cause we sure need to clear this up. Why? 'Cause there are some who still don't think elections were stolen, that 550 is the tweak we need to keep an election from ever being stolen, and that our country truly stands for Liberty and Justice for all.

BTW: I never claimed to be an intellectual. Therefore I am at your mercy, Madam.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:38 PM
Response to Reply #153
156. It was never an assumption
it is evidence. Evidence is what is needed.

I agree that there is evidence that votes were stolen. There is also evidence that Democratic voters were suppressed.

There is also evidence that Kerry voters participated in the poll at a higher rate than Bush voters.

None of this is contradictory. Your election system needs radical reform - not only do you need a transparent and reliable voting system, but you also need an end to the systematic disenfranchisement of Democratic voters.

I just like to see this argument being made with evidence that stands up to scrutiny. I don't think HR 550 is the "tweak" that will stop elections being stolen. There is far more work to do than that. And I hope that now that the Democrats have won both the House and the Senate, it will either be strengthened, or replaced by a more far-reaching bill.

Anyway, that's my view. We are each entitled to our view. I just don't like mine being misrepresented.

Peace.

Lizzie
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:56 PM
Response to Reply #156
158. Two things
The evidence detailing the stolen votes has reached a consensus level: nearly everyone agrees.

The claims you have made about there being a bias in the polls is based on your theory, a theory on which there is no consensus, but since you claim it to be evidence we simply must label it as an assumption that needs greater testing. After all there is evidence that Bush voters participated in the poll at a higher rate than Kerry voters = Conflict.

Whereas, there is no proof that votes were not stolen: No conflict.

Consensus pretty much means there is little conflict about a subject.

Is that intellectually clear?
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 07:05 PM
Response to Reply #158
162. nearly everyone agrees what?
Wow.
Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:45 PM
Response to Reply #138
157. Here's what I think about the '04 exit polls - the ONLY
group that would not have been happy to answer an exit poll is someone like my father in 2000 - for the first time in his life, he, a Republican to the core of his being, voted for a Democrat.

Had he voted for Bush in '00, and then been asked by an exit poller, he would have announced proudly that he had (as he announced to our entire neighborhood in '60 - when we lived in a Kennedy stronghold).

But although he could not vote for Bush in '00, he was with my mother when he left the polling place. It took two days for him to tell her. He would never have been able to confide to a stranger outside a polling place why he betrayed his party.

I know Republicans - most seem happy to announce they are Republicans, unless like in this last election, they have to betray their party to defend their country.

I know Democrats, they are happy to announce that they are Democrats - unless they can't stomach an individual candidate and have to vote for the enemy - just this one time in this one particular election.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 03:05 PM
Response to Reply #157
159. Then there are the republicans
Who called us traitors because we dared question the government.

I knew many Dems who were afraid to admit they voted for that damn liberal Kerry. Shoot, you wouldn't believe the flack I caught for that!

Conflict: A republican meme.
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-15-06 02:25 AM
Response to Reply #157
184. And the interesting thing...
As I recall a TIA analysis, the path to those reluctant Bush responders was in the Northeast, you know, that section of the USA where people just can't gather the words to say what's on their minds, where they are reluctant to stake out a position. Having lived there, if that's where they supposedly showed up, well, not much credibility to that argument.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-15-06 07:39 AM
Response to Reply #184
185. it's true that many of the largest red shifts were in the Northeast
For me, this is a pretty good argument against the idea that the red shifts measure fraud: hardly anyone even tries to explain why the Northeast would be an especially easy or useful place to steal a large proportion of votes. Running up the score in Connecticut and New York? Why?

As I remember it (and the survey data seems to bear it out), the strong opinions being expressed in the Northeast tended -- and still tend -- mostly to be sharply critical of Bush. There is really no way to tell for sure whether that affected the results, but it seems reasonable to me that people with the strongest opinions on either side may be especially likely to take time to fill out an exit poll. I'm not suggesting that that single factor caused all the red shift; I don't think it did. But between that hypothesis and the hypothesis that Bush stole lots of votes in Vermont and Delaware to establish "momentum," I know which one I think makes less sense.

A basic problem is that we obviously have no information about the people who didn't respond to the exit polls, as compared with those who did, so all the arguments about their motives are speculative. Folks who work with surveys all the time are used to exploring such speculative arguments and trying to figure out how to test them in the future. But in the context of an argument about possible fraud in 2004, it's pretty hopeless.

To me it makes more sense to ask, first, whether it is more likely based on evidence that we actually have that (for instance) Kerry won New York by 30 points or that the exit poll was wrong. I think the answer is pretty clear.

(autorank has put me on ignore, so presumably he will not read this response)
Printer Friendly | Permalink |  | Top
 
rock Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 01:49 PM
Response to Original message
3. I for one have not a single nit to pick
with your methods or conclusions. There is plenty of statistical evidence that the elections of the last few years are illegitimate.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:01 PM
Response to Original message
4. Go get em TIA
Thank You, Mom cat ......... KNR ........
Printer Friendly | Permalink |  | Top
 
rocktivity Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:06 PM
Response to Original message
5. I didn't need to crunch any numbers to come to your conclusion:
Edited on Fri Nov-24-06 02:08 PM by rocknation
...the GOP could not overcome the Democratic Tsunami and steal enough votes to win the House. But the fraud appears to have been sufficient to cut the Democratic majority by almost half to 27 seats (231-204), when compared to the projected majority of 49 (242-193). At least 11 seats appear to have been stolen...

but now I'm wondering if the only thing James Carville is wrong about is where to place the blame! (FYI, the seat count is now 232-200 with three races still undecided.)

:headbang:
rocknation

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:09 PM
Response to Original message
6. Well, I'm late to this party, but here is one error:
The COMBINED MoE for the latest 10-polls (10,000 sample-size) is 1.0%. This is a theoretical, formula-based MoE. It's the one which SHOULD be used in the probability calculation.


No, it is certainly not the formula that should be used. It completely ignores between-poll error. You can't simply pool all the participants as though they were participants in a single poll. Or rather, by doing so, TIA assumes that the only error in the poll is sampling error. He only needs to look at his between-poll variance to see that this cannot be the case.
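
A minimal sketch of the between-poll point (my illustration, not Febble's own calculation): compare the spread actually observed across the ten 2-party Dem shares with the spread that sampling error alone would predict, and with the 1.0% "combined" MoE that treats the 10,000 respondents as one big sample.

import math
import numpy as np

# 2-party Dem shares and sampling SDs (MoE/1.96) from TIA's 10-poll table
shares = np.array([0.558, 0.584, 0.612, 0.600, 0.579,
                   0.522, 0.531, 0.537, 0.604, 0.576])
sampling_sds = np.array([0.0175, 0.0157, 0.0199, 0.0171, 0.0189,
                         0.0118, 0.0144, 0.0157, 0.0194, 0.0165])

observed_sd = shares.std(ddof=1)                     # spread actually seen between polls
expected_sd = math.sqrt(np.mean(sampling_sds**2))    # spread sampling error alone predicts
pooled_moe  = 1.96 * math.sqrt(0.5 * 0.5 / 10_000)   # the "combined" ~1.0% MoE

print(f"observed between-poll SD: {observed_sd:.2%}")
print(f"SD predicted by sampling error alone: {expected_sd:.2%}")
print(f"formula-based combined MoE on 10,000 respondents: {pooled_moe:.2%}")

If the observed between-poll spread is well above the sampling-error prediction, then the pooled 1.0% figure understates the real uncertainty, which is exactly the between-poll error being ignored.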
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 04:30 PM
Response to Reply #6
26. "Shades of MANOVA, Batman!"
This is pretty much why we need precinct-level exit poll data (as you know). We can't estimate the variance accounted for by within- vs. between-precinct components for any particular contrast without such data at all levels of analysis.

I get small but negative correlations between the UNDERVOTE and PERCENT VOTING for Democratic candidates in most races in Florida by precinct. This is particularly true when the undervote is large (Sarasota), and it is "significant" in conservative estimates (controlling for family-wise error rates). Republican candidates appear to have no relationship (near 0) with UNDERVOTE.

That doesn't PROVE (a legal term) manipulation, but it's a hint that dropped votes for a portion of (randomly chosen?) Democratic voters are harmful to Democratic candidates.
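
For what it's worth, here is a minimal sketch of the kind of per-race check described above (not Sancho's actual analysis). It assumes a hypothetical precinct-level table with, for each race, an undervote rate and a Democratic vote share per precinct, and it applies a simple Bonferroni correction as one way to control the family-wise error rate (the post doesn't say which correction was used).

import numpy as np
from scipy.stats import pearsonr

def undervote_correlations(races, alpha=0.05):
    """races: dict mapping race name -> (undervote_rates, dem_shares),
    each a 1-D array indexed by precinct. Returns per-race correlations
    with a Bonferroni-corrected significance flag."""
    results = {}
    threshold = alpha / len(races)          # Bonferroni correction
    for race, (undervote, dem_share) in races.items():
        r, p = pearsonr(undervote, dem_share)
        results[race] = (r, p, p < threshold)
    return results

# Hypothetical usage (precinct_data would come from county election files):
# results = undervote_correlations(precinct_data)
# for race, (r, p, significant) in results.items():
#     print(race, f"r={r:+.2f}", f"p={p:.3g}", "significant" if significant else "")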

I agree, the EXACT formula and the between-poll (or between-precinct) variance are not available, but is there a reasonable correlation between polls? Is there an SSCP matrix or something that would allow us to pool or estimate between-poll variance? Is TIA wrong, or "maybe wrong," or "partially wrong"?

TIA's formula "mistake" affects power as much as anything. Regardless, even "odds" of 100 to 1, much less 100,000,000 to 1, that an election is hacked are pretty serious. I think that political correctness among pollsters is to avoid making the accusation, and I now accept that...but what can be done by pollsters to help with post hoc analyses that would demand a different system or a revote? Florida judges will USE the POLLSTERS' conclusion that they don't have PROOF of a problem to certify a hacked election....hmmm....political convention meets statistics!

The size of the probability is not as important as the conclusion and the actions we take in the future.
Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:30 PM
Response to Reply #26
154. I don't have any experience evaluating voting polls- but
having spent fifteen years dealing with researchers (both independent and corporate lackeys) on the pesticide issue, I want to discuss the word PROOF.


Proof is, and probably should remain, something that is only assigned to a statement after months or years (maybe decades) of study. (Unless of course it is something that is fairly obvious - toddlers left to play on freeways die often, for instance.)

Take this example: in the mid-sixties, or maybe late sixties, in America, the Surgeon General started putting warnings on cigarettes.

At that point in time, the correlation between cig smoking and lung cancer was not "proven." It would not be announced to the public as a "proven" correlation until sometime in the late nineties.

However, The Surgeon General had detected enough evidence that there was a serious reason to be concerned. Not a hint, but a serious reason to be concerned. He did the right thing -and I am sure that had he ever tried to run for office, the tobacco industry would have buried him alive with their corporate monies by saying that his decision to put the warning labels on cig packages was specious.

It is possible that this warning label saved lives. It at least informed the public that an important and relevant decision maker in our government had reviewed evidence and had serious reason for concern.

I would stay away from the word "hint." There may not be proof, but if you are spending time as a voting activist, and if you are witnessing things you never would have believed could happen in the United States, you have "serious reasons to be concerned." And that in my mind is good enough.
Printer Friendly | Permalink |  | Top
 
understandinglife Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:12 PM
Response to Original message
7. Recommended.


NOT ONE LINE OF SOFTWARE BETWEEN A VOTER AND A VALID ELECTION.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:37 PM
Response to Reply #7
12. Exactly. We have to fight for our rights!
Printer Friendly | Permalink |  | Top
 
msongs Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:14 PM
Response to Original message
8. what is the famous Mark Twain quote about statistics? nt
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:22 PM
Response to Reply #8
10. Yeah, but TIA's statistics
aren't lies, they are just wrong.
Printer Friendly | Permalink |  | Top
 
Mr_Spock Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:03 AM
Response to Reply #10
60. lol
It's a shame that most people who "k&r" these threads do not have the vaguest notion WRT statistical machinations. You can be sure that 90% of the recommendations "assumed" the content must be good due to its sheer volume. This is clearly a response to Skinner finally having enough of this crap and describing his previous analysis as an "embarrassment". http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=364x2775205#2781808

I still agree with Skinner.
Printer Friendly | Permalink |  | Top
 
Recursion Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:37 PM
Response to Original message
11. Variance is a tricky thing
Some made the claim that Generic polls are not useful for projecting votes.
If that were so, WHY do a Generic poll at all? Why did polling blogs cover
them at all.


They probably shouldn't, at least not nearly as much as they do.

Consider your sample:
...................DEM..... GOP... Margin
Average........ 52.2.... 39.6... 12.6

CNN.....1029... 53...... 42..... 11
NBC.....1030... 52...... 37..... 15
CBS.....1101... 52...... 33..... 19
Nwk.....1103... 54...... 38..... 16
TIME....1103... 55...... 40..... 15
.
Pew.....1104... 47...... 43...... 4
ABC.....1104... 51...... 45...... 6
USA.....1106... 51...... 44...... 7
CNN.....1106... 58...... 38..... 20
FOX.....1106... 49...... 36..... 13

Now, just looking at the "DEM" percentage for each, the variance of that percentage is 10.7. With a mean of 52.2, anything within 1 standard deviation (ie, between 48.9 and 55.5) is attributable to simple variance. Variance is one of the most-ignored topics in statistics, and ignoring it plagues stock market analysts, actuaries, and, yes, poll analysts. Look at the range of the polls themselves: a full 11 points. This is stochastic sampling noise, and with ~1100 sampled out of ~68,000,000 voters, that noise is always going to be loud. Now, the final tabulation I have shows 57.7 for the Democratic party in the House, which is outside 1 standard deviation of those polls but within 2 -- again, very easily attributable to sampling noise, but if anything it shows that the polls were biased towards the GOP.

If you drop the lowest and highest polls, the mean goes to 52.1, the variance goes to 4.1 and the standard deviation goes to 2.0. This actually puts Democratic votes outside 2 standard deviations from the mean -- but it puts them over the top, not below.

Exit polls eliminate one avenue of noise: people who say they will vote but don't. They still suffer from a large amount of variance due to stochastic sampling noise (IMHO, we aren't allowed to see raw exit poll data because the variance is about as high as in pre-election polls).

Now, let's look at how the undecideds broke:
The mean undecided population in the polls is 7.4%, the variance is 14.8 (!) and the standard deviation is 3.8. Let's say 75% of the undecideds broke for the Democratic party. That means the Democrats should expect (keeping within one standard deviation) a bump of between 2.7 and 8.4 points, with that gap again entirely attributable to variance from the sample noise. As it is, we got a 6-point bump from the pre-election mean poll data, which is well within a single standard deviation of the undecideds breaking (at a rate of 75%, remember), and that actually accounts for the roughly 2 standard deviations by which our final results exceeded the mean pre-election poll.

I guess what I'm getting at is, I don't *see* any missing votes in these numbers. I know there *were* missing votes because I've heard all the horror stories from different polling stations, but the fact is we came out well within a single standard deviation of the pre-election polls. If anything, that says to me these polls are very biased towards the GOP, and probably don't ask the very same populations that get disenfranchised at the polling booth.
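
A minimal sketch of the mean/variance calculation described in this post, using the ten DEM percentages listed above. The exact figures depend on whether the sample or the population variance is used, so they may differ slightly from the numbers quoted here.

import numpy as np

dem = np.array([53, 52, 52, 54, 55, 47, 51, 51, 58, 49], dtype=float)

mean = dem.mean()
sd   = dem.std(ddof=1)     # sample standard deviation
final = 57.7               # final Democratic House share cited in the post

print(f"mean {mean:.1f}, variance {sd**2:.1f}, sd {sd:.1f}")
print(f"final result sits {(final - mean) / sd:.1f} sd above the poll mean")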
Printer Friendly | Permalink |  | Top
 
Mr_Spock Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 09:53 AM
Response to Reply #11
58. Thanks for taking the time to post a logical and level-headed analysis
I always get the feeling that there are certain folks who have already decided the outcome before they do the analysis; all they have to do is fit the dataset into their theory. As you probably know, most anyone who is somewhat familiar with statistical theory can "convince" the 99% who do not understand enough about statistics to question the conclusions of the poster.

Kudos for putting the effort into this reply.
Printer Friendly | Permalink |  | Top
 
Peace Patriot Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:39 PM
Response to Original message
13. As always, you earn your name, TruthIsAll!
Thanks to Mom Cat for posting!

Strategy for circumventing the entrenched, bipartisan corruption in our election system, and achieving TRANSPARENT vote counts by the '08 primaries:

1. Don't count on Congress to do anything, and do be concerned that they might make things worse. Sen. Diane ("You too can learn to love our Corporate Rulers") Feinstein heads the Senate elections committee, which can gut or veto any good legislation from the House (and even the best of the bills there are not great). Bilderberg's presidential candidate, Sen. Christopher Dodd, is counting on Diebold/ES&S to return the favor he did them with the worst piece of crap legislation ever passed by a US Congress, the "Help America Vote Act" (--with the possible exception of the recent torture and suspension of habeas corpus bill). The mind-boggling silence of our party leadership, as Bushite corporations with very close ties to Bush/Cheney and far rightwing causes took over our election system with TRADE SECRET, PROPRIETARY programming code in all the new, fast-tracked electronic voting machines and central tabulators, needs to be factored into our expectation that a somewhat Democratic Congress will fix this election system disaster.

2. During the recent midterms, there was a huge increase in Absentee Ballot voting. This BOYCOTT of the machines by the voters was an attempt to get around the rigged electronics. In Calif., 50% of the entire state voted by AB, and it was big all over the country. This is the natural constituency for election reform, and needs to be organized into AB voting groups to pressure LOCAL/STATE officials to: a) HAND COUNT the AB votes, and b) POST the results BEFORE electronics are involved. We thus begin to create a paper ballot system by DEFAULT. These are reasonable, common sense demands, and clearly what the AB voters WANT. With such a big voter constituency, this part is very doable--at the LOCAL level. Then, as AB voters get these concessions from election officials, the thing will snowball. All will want their votes hand-counted with results posted immediately. The corrupt officials and corporations can keep their multi-million dollar contracts and shiny new crapass machinery for the time being--and use the machines merely for double-checking the handcounts, and for reporting/storing data. But these vile corporations will lose their SECRET control of the totals. This strategy also SEPARATES the interests of election officials from the election theft industry, and gives the election officials a way to save face.

I like to think that this strategy is as elegant as TIA's proofs of election theft, but that may be way too presumptuous of me. It is built of two factors--the voter REBELLION against the machines, plainly evident in the midterms, and the need to get around the BIPARTISAN corruption wrought by that $3.9 billion HAVA e-voting boondoggle, which has been the main obstacle to reform.

Also, two weeks before the election, I saw several of the war profiteering corporation news monopolies publish nearly identical "news" articles about the big Absentee Ballot vote. They all said it was voters "choosing convenience." They didn't mention that voters might DISTRUST THE MACHINES, not even as a possibility! I think this is a major clue to how potent this AB voter rebellion could be, if we in the election reform movement will only realize what it WAS. It was a PROTEST!

The election reform movement has been awesome at raising consciousness about this issue, but has so far lacked a strategy for achieving transparent vote counts. We need A strategy--a practical, on the ground means of achieving our goal--and we need a backup strategy for the very possible failure of Congress to produce real reform in time for '08 (--or their making things worse!).

This is the ONLY strategy that I have seen proposed. We can gather and analyze data, we can publish articles and books, and create web sites, and do exposes, and file lawsuits and lobby Congress. But unless we have a LOCAL strategy, I think we will fail. Time and again, election reform groups and activists approach local/state officials with objections to the electronic voting system, and time and again provide evidence that this system is a disaster, and the election officials close ranks and advocate for the corporate vendors. We have to break that lockstep. We need a strategy toward that end. I think this strategy has a lot going for it. I welcome others. But please don't tell me how you are going to convince Diane Feinstein to clean up this, the worst of all corporate corruptions of our democracy.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 02:55 PM
Response to Reply #13
16. I do like your theory.
I was wondering about several possible problems.
1. How secure is the absentee ballot system?
2. Similarly, how secure are the overseas and military ballots?

How are they presently secured? What improvements could be made to increase security for them?
Printer Friendly | Permalink |  | Top
 
Peace Patriot Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 06:02 PM
Response to Reply #16
35. No. 1: Most of the Absentee Ballots are scanned right into the rigged
electronic system. There is rarely a handcount BEFORE the scanning. AB voters have a very good argument that their votes should be hand-counted and posted BEFORE any electronics are used. That's what they want. That's why they voted by AB. And AB voters have the NUMBERS to get this done.

No. 2: There are jurisdictions where the AB votes are mishandled, for instance, shoved aside for later "counting," and NOT included in the 1% audit (if there is any audit at all) and/or with the e-voting totals announced first (which makes it harder for candidates to challenge suspicious results--since the e-vote is the one that generally produces the wrong winner). All such mishandlings of AB votes need to be halted. And AB vote handling/counting needs to be closely watched. Secrecy should not be tolerated. (The other part that needs watching is the disqualification of Absentee Ballots, due to signature problems, or nitpicky things.)

No. 3: Ensuring that your vote gets there. Oregon has been using all mail-in voting for some time. They have a system in place whereby you can call a phone number and confirm that your ballot was received. The alternative, outside of Oregon, is to mail your ballot certified/return receipt requested. Trusting USPS ordinary mail is an issue. Some states permit you to deliver your AB vote by hand to the polling place on election day (or in early voting). Hand delivery is probably best. 30 states have AB voting--some with more restrictions than others. We need to expand this right--and an AB voting movement and AB voting pressure groups could help with this.

No. 4: There is also an issue with centralization and the ability to WATCH the vote counting. But you can't "watch" the vote counting in precincts any more either. Ideally, precincts are where the votes should be counted with results posted in the precinct before the ballots travel anywhere, and before any electronics are used. And, ideally, local citizens watch other local citizens count the ballots, post the results, and carry the ballots securely to a central location for double-checks, and entry into machines for checking the hand count and storage/reporting. Centralization (inherent in AB voting) is a problem that can be overcome with good organization--having groups organized to go to the county seat and watch. It is an easier problem to solve than the fraud that is inherent in secretly coded electronic voting and central tabulation. And we are dealing with a CRISIS. We likely cannot get a return to a full paper ballot/precinct system for '08. Too much corrupt obstructionism tied to the multi-million dollar e-voting contracts. We have to get AROUND this corruption. Going with an Absentee Ballot strategy, temporarily, is a way to do that.

No. 5: AB voting is your only guarantee of voting in case of machine failure or other interference with the election. It is certainly more secure than touchscreen voting. Currently, it's about as secure as an optiscan vote (not very). But AB voters have a particular agenda and grievance. They are trying to get back to the old handcount system. They don't trust the machines. So this group of voters--and it is very large--have the motivation, and clout, to get back to a PARTIAL handcount system, that will likely snowball.
Printer Friendly | Permalink |  | Top
 
stellanoir Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:00 PM
Response to Original message
17. K&R'd
And TIA dear if you're lurking, I hope you and Mrs. ALL had a delightful day yesterday and that you're feeling well and taking good care of yourself.

best wishes as always and thanks
Printer Friendly | Permalink |  | Top
 
bridgit Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:09 PM
Response to Original message
18. amazing, k&r'd...
:kick:
Printer Friendly | Permalink |  | Top
 
stillcool Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:12 PM
Response to Original message
19. Fault-finders find fault...
even in paradise.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:45 PM
Response to Reply #19
23. My ex-husband was like that.
That is why he is my X.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:04 PM
Response to Reply #19
29. If it weren't for fault-finders
the world would be a more dangerous place.
Printer Friendly | Permalink |  | Top
 
stillcool Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 02:21 PM
Response to Reply #29
74. hey...whatever you seek....
Edited on Sat Nov-25-06 02:21 PM by stillcool47
is what you will find. I do not see the benefit in aggressively casting doubt on another person's compilation of data...unless you produce your own. There is no value in empty assertions.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:33 PM
Response to Reply #74
77. I think if you look around
you will find that Febble has produced quite a bit of information.
Printer Friendly | Permalink |  | Top
 
stillcool Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:35 PM
Response to Reply #77
87. no luck so far...
are you referring to Kos? Or, perhaps you could provide a link? I've searched and have only come up with more of the same. He does seem to hold himself in high regard, but ignorant as I am, I don't know why.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:38 PM
Response to Reply #87
88. no, I'm referring to Febble
(Why would I be referring to Kos?)

One somewhat scattershot approach would be to consult her journal -- or her diaries on DKos.
Printer Friendly | Permalink |  | Top
 
stillcool Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 07:00 PM
Response to Reply #88
89. Some of Febble's posts referred to Kos...
Edited on Sat Nov-25-06 07:10 PM by stillcool47
And my search of DU did not provide much different information. I did look at her diary...thank you for the suggestion.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 08:17 PM
Response to Reply #89
91. well, I'm at a loss
Febble is one of several (including Skinner) who have provided perfectly reasonable substantive criticisms of TIA. They aren't obligated to produce their own "compilations." That would be something like saying that I can't criticize an attempt to use carbon dating to prove a 6000-year-old Earth unless I do my own carbon dating. Sorry if I'm missing your point.

But as Febble's record on this board attests, she has put more time into quantitative analysis of election irregularities than, well, possibly any of her critics. She hardly fits the profile of a do-nothing knee-jerk naysayer. (Not to say that that was your point either. I don't know what your point was.)
Printer Friendly | Permalink |  | Top
 
stillcool Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:01 PM
Response to Reply #91
93. I don't think I have a point either...
Edited on Sat Nov-25-06 10:15 PM by stillcool47
only that there are several posters that seem to have a vested interest in negating TIA's analysis. It would seem to me that those that have a point to prove would link the appropriate material to prove that point. As it is, the only point proven to me is that polling is subjective...I do believe that TIA believes in his analysis, and I applaud his commitment, his knowledge, and his end product. For every believer there is a non-believer, be it Skinner, Kos, or DU as a whole....which is fine by me. The kicker is the tenacity and veracity of the naysayers...as though being vociferous equates to validity. The tone alone gets my back up...but...what ever.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 07:57 AM
Response to Reply #93
103. OK, let's compare notes
TIA was banned for (inter alia) repeatedly trashing DUers as freepers, and several of us were among the folks trashed. I've been trying to explain TIA's mistakes to him for a year and a half now, but others started even sooner. That aside, speaking for myself, I fully expected to relive every bad fraud argument of 2004 if the Democrats didn't win Congress, but I guess I couldn't admit to myself that we would relive them all even if the Democrats did win Congress. (Since you don't know me, let me say: I don't think every fraud argument is bad. But I think most of TIA's arguments are bad -- not only the fraud arguments, by the way.)

At the end of the day, my "vested interest in negating TIA's analysis" is that I think the analysis is flawed, and it is harmful for people to go around brandishing faulty analyses. I don't just mean that it is harmful for people to be wrong -- that's the human condition. But TIA is loudly, often abusively, wildly, inexorably wrong. It's not good.

"As it is, the only point proven to me, is that polling is subjective..."

TIA's calculations depend upon polling not being subjective. That, in a nutshell, is why so many smart people feel comfortable criticizing his analyses without bothering to link. (I don't think that polling is totally subjective, but it is distinctly so.)

"I do believe, that TIA believes in his analysis,"

I agree.

"and I applaud his commitment, his knowledge, and his end product."

His commitment would be great if he could learn from critics. Febble has been extraordinarily patient in multiple venues, and he regularly insults her -- but the insults hurt him, because they distract him from what he ought to be learning. His knowledge is extraordinarily selective, and his end product is very poor, for reasons that many of us actually have stated.

Obviously my point isn't to defend every word in every post by every critic, including myself. But if you review the history -- and study the arguments closely -- I think the tenacity makes more sense.
Printer Friendly | Permalink |  | Top
 
fooj Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:29 PM
Response to Original message
21. TIA gave me hope after the 2004 "selection"...
And he STILL does. K/R!
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:43 PM
Response to Reply #21
22. Me too!
PS, that is a cool new avatar! I almost didn't recognize you without the blue daisy! The new one sparkles! :bounce:
Printer Friendly | Permalink |  | Top
 
katinmn Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 03:55 PM
Response to Original message
24. Thanks, TIA. Not sure why anyone still disputes e-voting fraud.
Even in 2006 the facts are there and even documented by a wide variety of media, from corp-owned to independent.
See http://www.votersunite.org/electionproblems.asp for a continuing updated list of nationwide election "shenanigans" with links to actual stories.

This problem ain't going away on its own.

Printer Friendly | Permalink |  | Top
 
Peace Patriot Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:03 PM
Response to Original message
28. I just want to point out one thing about the posters who always seem to
Edited on Fri Nov-24-06 05:06 PM by Peace Patriot
aim at debunking any theory or evidence of the theft of U.S. elections by means of electronic voting, and it is this: They fail to take the context into consideration that BUSHITE corporations have in fact gained the power to steal almost every election in the United States, with TRADE SECRET, PROPRIETARY programming code in the new electronic voting machines and central tabulators, which have been proven to be extremely insecure, unreliable and insider hackable.

Non-transparent elections are not elections. They are tyranny. And the burden of proof is ON THE TYRANTS, not on their victims.

Given this CONTEXT, fraud must be PRESUMED, and we should be LOOKING for it with whatever tools we have in this non-transparent, tyrannical situation. There is no other purpose to a non-transparent vote counting system BUT fraud. If it wasn't fraudulent, it would never have been made non-transparent.

This context cannot be ignored.

It's fine to nitpick any evidence or argument, on any subject--either to improve the quality of the evidence, or to debunk it--so long as the nitpickers have the same goal, of specific and overarching truth. But that is not what these kinds of posters do. They make the wrongful presumption that this egregiously non-transparent vote counting system is NEUTRAL, and that there is some objective criterion out there for what the vote IS. This wrongful presumption has a part 2: attempting to dismiss and debunk every effort to discern the pattern of fraud in this non-transparent electronic voting system.

The two main vendors of this non-transparent vote counting system are two related corporations, with close ties to each other, to the Bush regime and to far rightwing causes: Diebold and ES&S.

DIEBOLD: Until recently, headed by CEO Wally O'Dell, a Bush-Cheney campaign chair and major fundraiser (a Bush "Pioneer" right up there with Ken Lay), who promised in writing to "deliver Ohio's electoral vote to the Bush-Cheney in 2004"; and

ES&S: A spinoff of Diebold (similar computer architecture), initially funded by rightwing billionaire Howard Ahmanson, who also gave one million dollars to the extremist 'christian' Chalcedon Foundation, which touts the death penalty for homosexuals (among other things). Diebold and ES&S have a further incestuous relationship--they are run by two brothers--Bob and Tod Urosevich.

These are the people who "counted" 80% of the nation's vote in 2004 under a veil of corporate secrecy--and their share of the election theft industry has only grown since. The initial coup--the so-called "Help American Vote Act" of Oct. 2002--was engineered by the biggest crooks in the Anthrax Congress, Tom Delay and Bob Ney, with the complicity of corporate 'Democrats' like Christopher Dodd, who provided a $3.9 billion electronic voting boondoggle to fast-track these non-transparent voting systems, and their corporate culture of secrecy all over the country.

And these are the people that some posters at DU are defending when they seek to cast doubt on an honest researcher and analyst--who is trying to pinpoint the nefarious goals and achievements of these private, FAR RIGHTWING corporations. They NEVER CAST DOUBT ON DIEBOLD AND ES&S. They save all their nitpickings and all their doubts for TIA.

-----

edit: typo.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:17 PM
Response to Reply #28
31. show me one poster
who "NEVER CAST(S) DOUBT ON DIEBOLD AND ES&S."

Show us one. I'm tired of the factually challenged, vaguely directed smears.
Printer Friendly | Permalink |  | Top
 
Name removed Donating Member (0 posts) Send PM | Profile | Ignore Fri Nov-24-06 05:53 PM
Response to Reply #31
32. Deleted message
Message removed by moderator. Click here to review the message board rules.
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 10:53 PM
Response to Reply #32
38. nope n/t
Printer Friendly | Permalink |  | Top
 
Mr_Spock Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:18 AM
Response to Reply #31
63. Hear! Hear!
It's one thing to cast doubt on Diebold and ES&S - I think almost all here do not trust the integrity & reliability of these machines & their crappy software/hardware. Using bad statistical assumptions to prove this point actually does MORE HARM than good IMHO.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 05:59 PM
Response to Reply #28
33. It is not nit-picking
to point out that a statistical argument is based on completely untenable assumptions, regardless of what else you might want us to consider.

I completely agree with you, as you know, that it is vital that your broken election system is reformed, and that means replacing your current voting machinery with something that is transparent, reliable, and secure, and ending the appalling voter suppression tactics that we saw yet again in 2006.

But none of that makes bad statistical arguments any better. You cannot infer fraud from TIA's analyses, because analyses based on untenable assumptions are not valid. You might well infer fraud from something else, but I do not see any point in offering flawed statistical arguments as hostages to fortune.

But even more importantly: I think that it is possible to set probable upper limits on the magnitude of vote-switching achieved in 2004 (no matter what the incentive), and I did so. I concluded that it was unlikely to have occurred on the scale inferred by some from the 2004 exit polls, and that therefore it was worth doing all you guys could to win in 2006.

And you did. That is fantastic. But, if you remember, PP, I was not very excited (and actually somewhat alarmed) by your AB campaign, because it seemed to be predicated on the assumption that you couldn't win anyway, given the scale of vote-switching in 2004 and thus the vote-switching capacity in 2006, and so you might as well Protest. Well, I was right and you were wrong - you could and did win in 2006, and fortunately it wasn't so close that a few uncounted absentee ballots made much difference.

So in the end the nit-picking didn't matter. But it might have done. If it had, I would have been bloody furious.
Printer Friendly | Permalink |  | Top
 
creeksneakers2 Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 06:01 PM
Response to Original message
34. I was the one who confused generic polls
Edited on Fri Nov-24-06 06:19 PM by creeksneakers2
with whatever you call the polls where they just ask which party the voter supports nationally. I was better informed after you responded on the previous thread. Thank you. I hope you didn't go to a great deal of trouble on this thread over me.

It's been years since I took a statistics course, so I can't remember exactly how they stated the rule. But the measurement you use must be consistent with what is being measured. If you ask people which they would choose if the choice were A or B, and when the actual choice occurs people are given a choice between A, B, or C, then the poll will not be valid.

Even when inserting the words "in your district" the ballot choice is not replicated because the candidates' names are not offered. For one thing, the poll leaves out the effect of simple name recognition, which is a powerful force in determining election results. Lots of voters go into the booth not sure who they will vote for and will see a name they've heard before and pull that lever.

According to my former professor, who is nationally recognized, a poll where the measurement doesn't match what is being measured is an invalid poll.
Printer Friendly | Permalink |  | Top
 
FogerRox Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 06:24 PM
Response to Original message
36. I never got past high school math.. so I won't touch the math here
Edited on Fri Nov-24-06 06:24 PM by FogerRox
But in General terms allow me to offer:

-The idea that Congress sucks, but "my congressperson is good," is valid. But the obvious counter in my mind is the overwhelming "throw the bums out." If I take the election results to be true, with no fraud, then it seems that "throw the bums out" may have trumped "my guy is good" in a variety of races where the incumbent might normally win. The throw-the-bums-out factor also seems to rear its head in many races where the bum did not get thrown out but ended up in a tight race, where the bum had won easily or Bush got 60% or higher in that CD in '04.

-Then there are the vote totals. I forget where I read this here at DU, but IIRC it was something like 25 million Repub votes vs. 36 million Dem votes. Should this reinforce the throw-the-bums-out factor? 25 million to 36 million is about a 40-60 split, right? Mirroring, IIRC, a last-minute CNN poll with the generic Dem up by 20%. Well, 20% of the House is 87 seats. And we didn't even get half of that (43 seats); we got about 35% of it, roughly 30 seats.

-Was it about 3-4 weeks out that Rahm Emanuel said there were 58 house races that were winnable?

-There seemed to be some indicators that '06 would be bigger than the '94 realignment; the Repubs won 54 seats in '94. It seems we ended up about 23 seats shy of that.

K&R for TIA. Regardless of how "perfect" your work is or isn't, your tireless work to shed light on this issue, in your own way, is admirable.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Nov-24-06 11:07 PM
Response to Reply #36
39. the bums...
I definitely think that "throw the bums out" trumped "my member is good" in some cases. The clearest example from what I know is Lincoln Chafee in the RI Senate, but there must've been some in the House, too. However, I don't see how that would account for Republican incumbents doing worse in the election than in the generic ballot. Of course there could be other angles to the generic ballot, too. I just think it's kind of a mess.

I don't know what vote totals you are citing, but they aren't for the House. The Democratic lead in counted votes was in the single digits, in percentage terms. TIA's main point, as far as I can tell, is that the vote count doesn't come anywhere near matching the CNN poll, or even the pre-election average with polls like the CNN poll included. Fair enough, although I don't think it supports his conclusions.
Printer Friendly | Permalink |  | Top
 
NobleCynic Donating Member (991 posts) Send PM | Profile | Ignore Fri Nov-24-06 09:26 PM
Response to Original message
37. Generic polls will always be different, sometimes dramatically so
from the actual vote because of name recognition. The assumption in your statistical model that there would be no difference introduces a potentially massive source of error.

Mind you, some of us agree that there was fraud in the election. Some of us just realize that this overstates the fraud by cherry-picking statistics. The problem with being the opposition is that your work and your numbers must always be held to a higher standard than those who support the status quo. While your statement that the election was flawed is in all likelihood correct, your proof is flawed. I'd consider it circumstantial evidence at best.

Mind you, if someone used the same methods to claim the Dems stole the election and posted it on FR, you'd see it on Fox News within days. They have no problem with presenting bad statistics. But here, you will find those who take issue even when it tells us what we want to hear.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 04:45 AM
Response to Reply #37
47. Oh, well said.
Yes. Progressives should and do pride themselves on being reality-based.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 12:42 AM
Response to Original message
40. Kick.nt
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 02:04 AM
Response to Original message
41. The machines are counting votes in secret,
To the people who are trying to discredit TIA's work: please join us in getting America to STOP counting the votes on secret vote counting machines. I think TIA is trying to give people a reason to believe that secret vote counting cannot be trusted. The people who are trying to discredit TIA's work, and who also know that counting votes in secret is TOTALLY WRONG, do a disservice to the American people by trying to discredit TIA's work, when all he is trying to do is tell the American people that the numbers DO NOT ADD UP when the vote is counted in secret.

If you know that counting votes in secret is wrong, WHY would you try to discredit TIA's work.

Help me out here cause I just don't get it.

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 03:34 AM
Response to Reply #41
42. Well this is an honest request
Edited on Sat Nov-25-06 03:43 AM by Febble
and deserves an answer.

And the answer is simple: because I think that the best way of stopping your votes being counted in secret is to make good arguments against it.

I think there are excellent arguments:

  1. The machines are unreliable (a great deal of evidence)
  2. The machines are hackable (a great deal of evidence)
  3. People try to steal elections (a great deal of evidence)
  4. Even if secret vote counting was completely reliable and secure (it isn't) it would still be undemocratic, because vote counting needs not only to be reliable, but to be seen to be reliable, if the Consent of the Governed is to be granted. Transparency is a fundamental issue.

But there is a great reluctance in some quarters to get rid of the machines, for a variety of motives, some good, some self-interested, some frankly evil. And presenting a straw man, ready made for those who would reject the arguments for getting rid of the machines, is a foolish strategy. And TIA's argument here is a straw man. His inference is not supported by his data. The pre-election polls do not suggest that millions of votes were stolen in 2006, any more than the exit polls suggest that millions of votes were stolen in 2004. To argue that they do, is to invite the issue of unreliable, non-transparent, insecure, unauditable voting technology to be dismissed as the obsession of cranks.

And it isn't. There are irrefutable arguments for stopping those machines. That's why I don't support refutable arguments.

edited subject header for clarity
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 04:13 AM
Response to Reply #42
43. Knowing all this, TIA's numbers should be small potatoes to you
"1)The machines are unreliable (a great deal of evidence)

2)The machines are hackable (a great deal of evidence)

3)People try to steal elections (a great deal of evidence)

4)Even if secret vote counting was completely reliable and secure (it isn't) it would still be undemocratic, because vote counting needs not only to be reliable, but to be seen to be reliable, if the Consent of the Governed is to be granted. Transparency is a fundamental issue".

If you were trying to rid our country of ANY AND ALL machines that count our votes in secret, TIA's numbers would not be important to you. But they seem to be, and that's what I don't get. TIA's numbers could be 100% correct or 100% wrong; you seem to think they are mostly wrong. How could anyone who knows and understands that the votes are indeed being counted in secret argue against TIA's numbers?

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 04:26 AM
Response to Reply #43
44. Well, I don't think
Edited on Sat Nov-25-06 04:27 AM by Febble
that the evidence supports TIA's numbers.

kster - as you know, I agree with you that secret vote-counting has to stop, for all the reasons I gave. But those reasons are not proof of TIA's numbers. However, that makes no difference to the argument. If only a few thousand, or a few hundred thousand, or even a few hundred, votes were stolen on the machines that is argument enough for reform. Actually, I'd say the fact that we don't even KNOW how many votes were stolen is argument enough for reform. Even if NO votes were stolen, you'd still have an irrefutable argument for getting rid of the machines.

But I think the number of stolen votes, in both 2004 and 2006, is extremely unlikely to be in the millions, and I infer this, in fact, from the 2004 exit poll data I was privileged to have the job of analysing. This was why I was confident that you guys could win in 2006. I didn't share your view of the mountain of fraud you'd have to climb.

And you did win. There was still fraud; there is still risk of greater future fraud; there was still abominable voter suppression, and the machines were idiotically unreliable. All this needs to change. But none of it means that the scale of vote theft was in the millions, either in 2004 or 2006, and arguing that it was seems like asking to have the whole issue marginalized. The data do not support it. TIA's analysis is wrong.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 04:35 AM
Response to Reply #44
45. How can you say TIA's numbers are wrong
if you know and understand (and I know you do) that the votes are being counted in secret?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 04:43 AM
Response to Reply #45
46. Because
I have done the appropriate statistical analysis of the polling data.

Mostly TIA starts with the right numbers. He just makes the wrong inference, because he makes faulty assumptions. With statistics, the assumptions you make are crucial to the inference you can draw.

If you make the correct assumptions, you cannot draw the inference that TIA draws. Any dispassionate view of the pre-election polls leads to the inference that the results were in line with the polls. This is good news - it means that the system is not as broken as you think it is. It was functional enough to let you win. Now you've won, you can fix it.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:00 AM
Response to Reply #46
48. America, No doubt it is, GOING TO BE FIXED, but
where do you get your numbers from? The votes are being counted in secret, and I don't think TIA denies that fact. But you seem sure that TIA is wrong, so where do you get your numbers from?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:07 AM
Response to Reply #48
49. Well, generally from the same source as TIA
Edited on Sat Nov-25-06 05:07 AM by Febble
in fact, if you read my post #27, I use TIA's numbers. But using what I consider a more valid set of assumptions I come to a different inference.

His starting numbers (his data) aren't necessarily wrong. Nor are his actual calculations. It's the assumptions behind his calculations that are wrong.

In this case, demonstrably so. His own data (the 10 most recent pre-election polls) show clearly that at least some of the polls were biased. They can't not be. But TIA ignores this when calculating his MoE. So his probability calculations are completely wrong.

edited for typo
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:20 AM
Response to Reply #49
51. The votes are being counted in secret
but TIA's numbers may or may not be wrong? I will never understand that. You KNOW the votes are being counted in secret, but still you keep spewing the same stuff.

I hope you feel happy, trying to convince people that you are right and TIA is wrong, EVEN THOUGH YOU KNOW THAT THE VOTES ARE BEING COUNTED IN SECRET.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:34 AM
Response to Reply #51
54. So you are saying:
We know a crime could have been committed. Therefore, someone who tells me that it was committed on a huge scale must be right?

That makes no sense to me. Does it really make sense to you?
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 12:26 PM
Response to Reply #54
68. Yes, in this particular case, because you know for a
fact that the votes are being counted in secret. Once you know that fact, the debate about the numbers should be over, and the people who are able to do these numbers should join forces until the secret vote counting is stopped. That's just the way I see it.
Printer Friendly | Permalink |  | Top
 
Ms. Toad Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 03:08 PM
Response to Reply #68
76. But when you have someone yelling that
the odds that the election was fair are 76 billion to 1 in order to get people's attention, and that number turns out to be easily discredited by anyone with a decent understanding of statistics and/or polling, the people whose attention you are trying to get stop listening.

When someone is trying to convince me to their side on a particular issue, and I find out that something they have told me is not supportable (particularly when it happens over and over again) I tend to ignore everything else they tell me because I can't trust them to be careful with the truth.

I don't want people to be drawn into working for fair elections by cries of "the sky is falling" only to be turned off when they find out that it's just a bit of a branch that fell off the tree. Granted, the fact that the branch is falling may indicate the tree is dying - but by the time I get over being pissed at having been convinced the sky was falling by numbers I didn't really understand in the first place (but which sounded good), I may not stick around long enough to figure out that the tree dying is also a bad thing.

In the short run, the alarmist "sky is falling" cry may get more initial responses - in the long run, the "tree is dying" reality will generate a more sustainable drumbeat for change.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 07:59 AM
Response to Reply #51
56. I won't speak for Febble
Edited on Sat Nov-25-06 07:59 AM by OnTheOtherHand
but while "happy" doesn't characterize how I feel, I'm certainly not defensive that some of us actually care -- and think it matters -- whether TIA's arguments are right or wrong, while some of us don't.

Think about this some more. You don't need TIA to tell you that the votes are being counted in secret, so why do you care about this at all? I think Febble has actually spent more time on DU criticizing electronic voting than TIA has, so why do you criticize her? It makes no sense to me.

Maybe lousy arguments make the (ETA: election integrity) movement stronger, but I think we at least owe it to ourselves and the cause to debate the point.
Printer Friendly | Permalink |  | Top
 
Stevepol Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:20 AM
Response to Reply #46
52. So your assumptions are better than TIA's assumptions because ...?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:32 AM
Response to Reply #52
53. Well, I did try to explain
Edited on Sat Nov-25-06 05:44 AM by Febble
but here it is again: TIA ignores the evidence in his own data that each poll included "non-sampling" error. If the only error in all the polls was sampling error all the polls should have been very close together. Only one in twenty would be expected to have a result further from the mean than the 95% MoE of each poll. In fact the margins are all over the shop, and the "between poll variance" is far greater than the standard error of each poll. So the only way to determine whether the final count was within the error of the polls is to look at the standard deviation of the margins across the 10 polls. And the count (using TIA's estimate - I think the final margin will be larger) was well within two standard deviations of the margins in the polls.

So, far from the odds being 1 in some-large-number-with-lots-of-zeros against the count differing from the average of the polls by chance, the probability is not even in the range that would be considered "statistically significant" by a social scientist.

And that's before we even start to consider that it was a generic poll, and that polls of specific House races were good predictors of the results, or that a host of other well-informed pundits pretty well nailed the final result.*

So to answer your question - my assumptions are better because they are supported by TIA's own data. His assumptions are refuted by his own data, as well as by a host of well-informed polling bloggers.


ETA: *oh, and before considering that we are talking about a finite population of polls, and we do not know whether the non-sampling error was normally distributed. The poll with the smallest margin could have been the most accurate; but just as easily so could the poll with the largest margin.

(and to insert a missing word, and de-mangle a sentence)
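
For anyone who wants to check the arithmetic, here is a rough sketch of the calculation (the ten margins and the counted margin below are placeholders, not the actual poll figures - substitute the real Dem-minus-Rep numbers to reproduce the check; the sample size of 1000 per poll is the assumption TIA himself uses):

```python
import math
import statistics

# Placeholder Dem-minus-Rep margins (percentage points) for the ten polls.
# These are illustrative numbers only - plug in the real final margins.
margins = [4, 6, 7, 9, 11, 12, 13, 15, 16, 20]
n = 1000                      # assumed sample size per poll

mean_margin = statistics.mean(margins)
sd_margins = statistics.stdev(margins)     # observed between-poll spread

# Spread expected from sampling error alone: for a roughly 50/50 race the
# standard error of a poll's margin is about 2*sqrt(0.25/n), in points.
se_sampling = 2 * math.sqrt(0.25 / n) * 100

count_margin = 8              # assumed final counted margin, in points
z = (mean_margin - count_margin) / sd_margins

print(f"poll average: {mean_margin:.1f}, observed SD of margins: {sd_margins:.1f}")
print(f"SD expected from sampling error alone: {se_sampling:.1f}")
print(f"count sits {z:.2f} between-poll SDs from the poll average")
```

If the observed spread of the margins is clearly bigger than the sampling-error line, the polls cannot all be random samples from the same population; and if the count sits within about two between-poll SDs of the poll average, the difference is nothing like a one-in-trillions event.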
Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 02:38 PM
Response to Reply #53
155. Could you explain this in a clearer fashion? A Poll
Assumption Examination/Explanation for Dummies in other words. Some definitions would be marvelous.

I am lost right at the beginning - what is a non-sampling error? I mean, a lot of us do not even know what sampling is - so taking it one step further is even more confusing. What is between-poll variance?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Nov-30-06 06:55 PM
Response to Reply #155
161. OK, I'll have a go
but you might like to have a look at this as well:

http://www.dailykos.com/storyonly/2006/11/4/135126/905

When we want to study a population of something (voters, corn, bingo balls, goldfish) we can take a sample of them, and by studying the sample, we can infer what might be true of the population.

The "law of large numbers" says that if you take a large enough sample, then it doesn't matter how huge the population is, you will get a pretty good approximation in your sample to the proportions of whatever it is you are interested in in you population.

The law, however, depends on the sample being truly random. If it is truly random, then you can figure out how much "error" there is likely to be in your sample. For example, if you had a population of voters with 50% Republicans and 50% Democrats, and you sampled a thousand of them, on average you'd get around 500 Republicans and about 500 Democrats. However, you wouldn't get exactly that number each time - the samples would vary even though the proportions in the population stayed the same.

That variability between samples is what we call "sampling error" and it can be calculated quite precisely, in terms of probability, and is the basis of what pollsters call their "margin of error".
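
To put a number on that, here is a little simulation sketch (the 50/50 population and the sample size of 1,000 are just the illustrative figures from the example above):

```python
import random
import statistics

# Hypothetical illustration: a 50/50 population, polled over and over with
# samples of 1,000. The spread of the sample percentages IS sampling error.
random.seed(1)
sample_size = 1000
dem_shares = []
for _ in range(2000):                        # 2,000 simulated polls
    dems = sum(random.random() < 0.5 for _ in range(sample_size))
    dem_shares.append(100 * dems / sample_size)

spread = statistics.stdev(dem_shares)
print(f"average Dem share across polls: {statistics.mean(dem_shares):.1f}%")
print(f"poll-to-poll spread (1 SD): {spread:.1f} points")
print(f"approximate 95% margin of error: +/- {1.96 * spread:.1f} points")
```

You should get a spread of about 1.6 points, which is where the familiar "plus or minus 3" margin of error for a 1,000-person poll comes from.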

However, not all samples are truly random samples from the population you are interested in. You might, for example, get more women answering the telephone than men. And this might mean that your sample tended systematically to have a larger proportion of Democrats in it than the population you are interested in. The way this is usually phrased is that the sample has been drawn from a "different population" than the population you want to study - you've sampled from the population of telephone answerers, rather than the population of voters. Or you might have sampled from the population of voters who thought they might vote, rather than from the population of those who eventually did vote.

This kind of "error" in polls is called "non-sampling error" - which just means any source of error in the poll that isn't "sampling error".


So: if you have several polls, each with a sample of about 1000 voters, you can easily figure out whether there was "non-sampling" error in the polls. If the only error was "sampling error", all the polls should be pretty close together - all the variability between polls would be the variability we'd expect simply because different random samples had been drawn from the same population of voters. In other words, the "between-poll variance" (the amount the polls vary from each other) should be small, and we can calculate how small.

But in fact, the "between-poll variance" of these ten polls is much larger than we would expect if there was sampling error alone. So we can infer that there must have been "non-sampling error" as well. In other words, not all the polls were samples from the "same population".

And the trouble then is - we don't know which ones drew from populations closest to the population of voters who actually voted. There is no particular reason to suppose that the average is correct. We just know that some of them were wrong - biased. The most accurate could be the ones near the middle, but it is also just as possible that the most accurate were the ones near the edge. If bias tends to be in one direction and just varies in magnitude, then the most accurate polls might tend to be at one end of the distribution of polls.

So, to try to explain "assumption":

An assumption is something that your inference depends on. For example, if we assume that the only error in the poll is sampling error, we might infer that there was fraud. However, we can demonstrate that that assumption is false by looking at the between-poll variance, as I described above. There are many sources of non-sampling error in polls, and one pitfall is to assume that they were not important in your sample. For example, another assumption that some people have made is that voters report their previous vote (the way they voted at the election before the current one) correctly. Again, there is a lot of evidence that this is not the case - that a small but significant minority tend to mis-report having voted for the previous winner (or fail to remember having voted for the previous loser). Or that "response rates" are correctly reported (there is evidence in the exit poll data that this cannot always have been the case).

So when anyone makes a statistical inference, it is important that they state their assumptions (or that readers are aware of the assumptions that have been made) because the confidence you can have in the inference depends on the confidence you have that those assumptions are good. And in some cases the data itself tells you that the assumptions are false.

I hope this helps. Sorry, I have to go to bed right now. Let me know if you have any questions.

Cheers

Lizzie
Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-01-06 12:31 PM
Response to Reply #161
163. Ah your first four paragraphs help a lot
So the variance in samples refers to samples taken within the same poll - I was totally confused thinking they were samples taken and compared amongst several different polls - which to me did not even make sense to do (In other words, say in '04 you found 50% of voters in Akron OH voting for Bush, 50% for Kerry, then in Chicago you had 39% of voters voting for Bush, 60% for Kerry - I am just making these numbers up - but I assume Chicago would never be 50% split between Dems and Republicans for many reasons) I could not for the life of me figure out why you would retake the polls of these two places and then need to use sample variances of comparing the two DIFFERENT cities' polls to decide something.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-01-06 01:13 PM
Response to Reply #163
164. Just checking to make sure I have not in fact muddled you....
The variance in samples refers to samples (polls) taken from the same population (maybe that's what you meant). Each generic poll will take a different sample from the population of potential voters, but all the polls are supposed to be sampled from that same population.

So if all the polls were true random samples from the same population of potential voters, there should be very little variance between the polls, given the large sample sizes.

But in fact, there is a substantial amount of variance between the polls ("between-poll variance"), and that is what tells us that the samples weren't from the same population, but from slightly different populations. This frequently happens in surveys, which is why we have the odd term "non-sampling error" to cover it, although the sources of non-sampling error are extremely diverse.

Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-01-06 01:28 PM
Response to Reply #164
165. Uh Oh If you had muddled me before I did not even know it
Now I am truly confused
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-01-06 01:57 PM
Response to Reply #165
166. Well, tell me the problem
and I'll try to sort it. PM if you want. Sorry to have confused you.

Lizzie
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:29 AM
Response to Reply #52
55. PS: FWIW
Edited on Sat Nov-25-06 06:31 AM by Febble
I am not a statistician, and nor is TIA. However, the statistical computations involved here are not particularly difficult, and can be done easily in Excel. TIA is a master of Excel, but I am no slouch. More complex statistical computations require specialist packages. Sometimes they have to be purpose-written (and I have written a few myself).

But the point is that the essence of inferential statistics is the testing of a null hypothesis. The "art" of statistics is not the mathematics of the test, but the devising of an appropriate "null". This is where the relevant experience is not necessarily in mathematics, or even, actually, in statistics (although statistical competence is important), but in the nature of the data you are analysing.

My own expertise is with social science data. People with expertise in fields that deal with these kinds of data (social psychology; sociology; political science; public opinion research) have tended to draw rather different inferences from polling data than people with expertise in, to take a not quite random sample: engineering; finance; law; organisational dynamics; computer science, even though the latter group may have good quantitative skills.

This is not because there is a conspiracy of social scientists to validate the Bush administration. I don't know about the US, but in the UK, social scientists are a notorious bunch of lefties. Presumably because the more one studies social science, the more aware one becomes of social injustice.

It is because social science data has characteristics that need to be considered carefully when constructing null hypotheses for the purposes of statistical inference.

So, while I rarely appeal to credentials (and very rarely to my own), preferring to rely on the validity of my reasoning, I will say: if the inferences from survey data made by social scientists differ from the inferences made by non-social scientists, then there is only a 1 in a-fairly-large-number probability that the non-social scientists are nearer the mark.

And I'd point out that it was social scientists who demonstrated, twice over, that Gore won 2000.

Edited to correct misleading sentence!
Printer Friendly | Permalink |  | Top
 
mogster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:12 AM
Response to Original message
50. Kick!
:kick:
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 09:18 AM
Response to Original message
57. My frustration with defense of pollsters...
There will NEVER be convincing evidence to satisfy defenders of polls used as evidence of manipulation unless 100% of the voters respond to the poll, and the responders don't lie, and so on. The pollster position du jour is..."we don't look and don't tell" because that's someone else's problem.

Actually, it makes sense for a completely external check on a process to include sampling in polls.
Here's my frustration with poll evidence and pollster arguments to me...

1.) In most cases, there may be useful evidence (precinct level data) that is not shared by pollsters. The "privacy" rationale doesn't seem adequate, even to many social scientists who work with private data all the time.
2.) Why get on TV and "call the election", but then later say, "we don't really stand by the call because of sampling errors"? It's like a weather forecast that is always "wrong" and won't defend the reasons! "Since it rained, we've adjusted yesterday's forecast to say that it will rain!" This is false advertising for many folks.
3.) Pollsters have year after year of methodological excuses (2000, 2004, 2006) that don't result in clear fixes (or even method changes) for reluctant voters, gender-biased interviewers, etc. At least the effort to make it better should be evident and announced and documented. What will be different next time? The only change is to become more secret and hide in undisclosed locations! Hmmm...sort of reminds me of some of our elected folks.
4.) As some have said, TIA may be "wrong" in the assumptions of poll representativeness, but TIA may also be right! As long as data is hidden, adjusted, and counted in secret, we don't know, and TIA has the right to argue that samples might be representative - and to challenge pollsters (pre or exit) to defend their process. Peer-reviewed journal articles that meet "statistical assumptions" and test for "significance" are often demonstrated wrong later in history, and challenges are part of the game.
5.) Pre-election and exit polls seem to "agree" and both differ with the election results even when created under different circumstances. This needs an explanation. Replicability is as powerful as "statistical assumptions of normality"!
6.) As EDA and TIA have pointed out, there are specific questions and demographics on the polls that lend evidence to the polls' "validity", as opposed to complaints of voting problems that are evidence of "lack of validity" of voter intent. Again, the pollsters (who put the questions on the poll) need to address this discrepancy directly.
7.) Why don't pollsters target suspected districts and races to get a good picture of what is happening there? It's not like we can't predict some places for extra data to be useful. God only knows why there is $40,000,000 spent on election advertising in Sarasota, Florida and NOT ONE single national, comprehensive pollster is asking everyone in town how they voted and why! After the last 3 elections, it doesn't make any sense at all to be guessing and asking people to call in with problems. Seems like the pollsters don't want to know!
8.) We can debate "typical" uses of "significance" forever (Clarification: On Statistical Testing Carl J. Huberty Educational Researcher, Vol. 17, No. 1 (Jan. - Feb., 1988), p. 24) and there will never be a magic number. The same thing for the "normality" of the data or "robustness" of the conclusion if there is a violation of normality...

See no evil, hear no evil, speak no evil!

BTW, Mark Twain attributed his quote to Benjamin Disraeli, but there are a number of historically similar quotes...
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:24 AM
Response to Reply #57
64. yawn
As long as you insist on the frame of "defense of polls," you aren't likely to be able to learn much, which may be why you don't seem to learn much. Good luck with that.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 11:43 AM
Response to Reply #64
66. ah, let me point out a few of the problems
0) In this thread we're talking about generic polls whose results range from Dem +4 to Dem +20. To portray TIA's critics as defending polls makes no sense.

"1.) In most cases, there may be useful evidence (precinct level data) that is not shared by pollsters."

Well, someone has to make that case. Sancho hasn't tried to explain why, specifically, the precinct level data would be useful. There's no point in arguing about this again, because it has nothing to do with the thread anyway.

"2.) Why get on TV and 'call the election', but then later say, 'we don't really stand by the call because of sampling errors'?"

This has nothing to do with the thread either. I assume Sancho is thinking of the first blown call in Florida 2000, and we know that call was based on vote counts as well as interview data.

"3.) Pollsters have year after year of methodological excuses (2000, 2004, 2006) that don't result in clear fixes (or even method changes) for reluctant voters, gender-biased interviewers, etc. At least the effort to make it better should be evident and announced and documented...."

Also has nothing to do with the thread. It's just a wish list. It doesn't really matter whether anyone else agrees with the wish list; it's simply off-topic.

"4.) As some have said, TIA may be 'wrong' in the assumptions of poll representativeness, but TIA may also be right! As long as data is hidden, adjusted, and counted in secret, we don't know and TIA has the right to argue that samples might be representative...."

TIA has the right to argue whatever he wants, although he isn't allowed to do it here because he was banned from DU after repeated warnings. However, there is no obvious reason for any knowledgeable observer to accept his arguments. If someone sees a reason, he or she should present it.

"5.) Pre-election and exit polls seem to 'agree' and both differ with the election results even when created under different circumstances."

Evidence? Most of the 2004 pre-election polls showed Bush ahead. The state-level polls don't match either. In 2006, the generic pre-election polls were all over the place, and the race-by-race polls in the aggregate are close to the result.

"6.) As EDA and TIA have pointed out, there are specific questions and demographics on the polls that lend evidence to the poll's 'validity'..."

Name one, and provide specifics.

"7.) Why don't pollsters target suspected districts and races to get a good picture of what is happening there?"

And a pony! More off-topic wish-listing.

"8.) We can debate 'typical' uses of 'significance' forever...."

Well, in the long run we will all be dead. In the meantime, does anyone have anything they wanted to say in defense of TIA's arguments, or should we move on?
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 01:41 PM
Response to Reply #66
71. you resort to rhetoric...
1.) Precinct-level data would be very useful to generalize the samples in some polls, and to compare with other precinct and district data from other election information - which goes to exactly some of the criticism of TIA.

2.) Pre- and exit polls are generated and published by media...who relish announcing the results. That means they should relish alternative analysis of the data published and be responsible for accuracy.

3.) Don't criticize TIA's between-poll error (or WPE or anything else) when the pollsters can control this issue but don't want to...at least they don't appear to try.

4.) Banning from DU is off the thread (as you like to say)

5.) You are now guilty of "selecting" the data to suit your argument - like R.A. Fisher in the song I posted for you. TIA and EDA and others clearly state the polls used...and TIA tends to use everything available. We all know about the pre-election polls, trends, and predictions.

6.) Where have you been...EDA and others name the questions and details in their reports.

7.) You don't have to like the question, just answer it! Why don't pollsters investigate and focus on the interesting races and districts? They must not be interested!

8.) If you want to be critical, then defend your argument: if TIA doesn't meet "statistical assumptions", then how do you know? How do you know that TIA's analysis is not "robust" in terms of the "assumptions"? You are guessing, just as you accuse TIA of starting with faulty assumptions. I've seen little real evidence of either, because of numbers 1 and 6 above!

As I stated to start with, there is NO amount of evidence that would convince you if you intend to defend pollsters or attack TIA..."those convinced against their will are of the same opinion still"

I'm still open to the possibility that TIA may not have the "perfect" formula or exactly correct "probability", but there is certainly some interesting merit to this latest argument that awaits discussion.

Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:53 PM
Response to Reply #71
81. ?
1) Very vague.

2) And a pony.

3) If TIA offers unwarranted claims, of course I will criticize him, and so should you. TIA apparently feels comfortable making the claims without further data, so complaining about the pollsters seems like an evasive maneuver.

4) The point is, it's pointless to say that TIA "might" be right.

5) No, if you want to present evidence, you present it. Don't invoke specious authority.

6) Where have you been? Folks have knocked down the EDA report in multiple threads on DU and Daily Kos -- I don't know that anyone in the real world has bothered. If you think you can salvage the argument, then go for it.

7) More artless diversion. TIA says the polls prove fraud; you apparently think it's the pollsters' fault that they don't. Or maybe you think they prove fraud and it's the pollsters' fault that they don't. Whatever. (And a pony.)

8) I've responded to TIA's arguments here more often than any banned poster has a right to expect. If you think you can make his arguments better than he can, please do.

I haven't set out to "defend pollsters" or to "attack TIA." I thought we were discussing purported poll-based evidence of election fraud. Or something.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 11:59 AM
Response to Reply #64
67. It's hard to "learn much"...
from those who defend secret and selected evidence, but attack others and state they know more than others...

I read many of the links suggested by Febble...and I don't have any problems following the statistics and discussions of WPE, etc. I believe (as Howard Wainer points out in many statistical journals) that manipulation of numbers with fancy techniques doesn't substitute for doing the job well to start with...and criticism answered with "we know what we are doing" is not good enough. Perhaps you need to listen and learn a little, also...sorry if you're bored.

With respect to our friends across the pond, here's the problem:

For, Fisher can always allow for it.
All formulae bend to his will.
He'll turn to his staff and say, "Now for it!"
Put the whole blinking lot through the mill."

Then Wishart and Irwin and Hotelling.
And Florrie and Dunkley as well.
Their breast with modest pride swelling.
Said, "Shall we do likewise? We shall."

If Fisher can always allow for it.
Oh, why on earth, why shouldn't we?
And as he's bagged chi-squared we'll bow to it,
and make up our own formulae!

Statistics, you see is a wondrous cult
For a non-mathematical mind.
Which wants but the final, or end result-
As to how it's attained is quite blind.

Bernard Keen
(sung to "Wrap Me Up in My Tarpaulin Jacket")
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 01:40 PM
Response to Reply #67
70. you are ignoring substance
Edited on Sat Nov-25-06 01:50 PM by OnTheOtherHand
For instance, the OP is about pre-election polls, on which you have every opportunity to be as knowledgeable as any of us. Why not avail yourself of the opportunity?

EDIT TO ADD: Incidentally, I have no more access to data than you do, so as far as I'm concerned, your complaints are an empty diversion.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 02:08 PM
Response to Reply #70
73. hmmm....I have started...
so, tell me:

Have you tested any of the pre-election (or exit) polls - or the election results, for that matter - for normality?

Given some common questions, have you tested that polls come from different populations?

My early analysis of precinct level data where available shows a common population so far...but mostly I've been looking at precincts of interest to me in Florida. If you want, I can start another thread with some Florida data already in SPSS or SYSTAT for those who are interested, but it would likely be pretty dull. I've also found some misfit in the categories in the questions in the 2004 E-M data...interesting, but that is not what you want.

The point is that you may be critical, but you don't really demonstrate that TIA is wrong any more than he demonstrates he is right, based on unexplored/unavailable data...it's unknown. And there is no opportunity to obtain the "assumptions" that you say TIA is missing.

When I read articles and links referenced by Febble, I see some interesting things, and other things that do more manipulation than digging for the answers.

I don't want to debate power and effect size on DU, but that doesn't mean there aren't lots of debates that could be settled. We would all like to see election officials do a better job, but one way to force the issue is to report whether or not there is a bunch of poll data that reveals an issue in a single disputed race or precinct or district that can't be explained in any way by "poll errors".

To do that, the pollsters (pre or exit) simply need to want to do it...and they don't want to. Are they chickens, or false prophets, or protecting clients, or what? The answer that they don't want to know how people voted is simply not acceptable any more...and criticisms (such as on this thread) that researchers working from those polls don't meet assumptions are misleading if TIA or others are doing the best they can with the limited data they have...

IF TIA and EDA designed and advised the polling, I'd bet the quality of data would satisfy them one way or the other.

Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 06:01 PM
Response to Reply #73
84. have I tested WHAT for normality?
Presumably you realize how incoherent that question is.

Presumably, if you are impressed by the 2006 pre-election generic ballot results, you have tested for yourself whether those appear to come from a single population -- or maybe you didn't need a formal test to see that they probably don't.

I have no idea what you would regard as a demonstration that "TIA is wrong." I think five different folks have pointed out giant gaps in his assumptions which you have made no attempt to rebut. I can't even tell why (or whether) you think TIA's work is interesting. So far, you have no arguments, only demands.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:04 AM
Response to Reply #84
104. Again, more rhetoric than paying attention...
I didn't say TIA was correct...in fact, his probabilities are likely overstated...and I've said so before.

What I have advocated is that some of the "assumptions" discussed are not testable given the data, and may not matter if the conclusion is robust to a violation of the mathematical assumption. That may be too technical for this thread. If you want to discuss specific objectivity, independence, sampling vs. sample distributions, levels of data quality, etc., we're getting a little difficult for the general audience. When Febble mentions within/between variance, that is ONE of the issues, but pollsters tend to play up what they are familiar with and ignore other things. Most of the "assumptions" could be addressed by good poll design and sampling plans and transparent data access. Some poll data is flawed to start with (like the assumption that Likert scales are interval level). Some are collected in flawed ways to start with (like failure to sample the nonignorable nonrespondent).

It is not correct to criticize TIA for "not meeting assumptions", since there is no way to know whether the assumptions are met, nor whether it makes a difference to the conclusion that "polls don't match the election". That is a false assertion. Neither you nor TIA KNOWS if all the "mathematical assumptions" are met!

There are other ways to skin the cat and Febble is correct that we may be well-served to look at things more visible and relevant.

Some of TIA's (and EDA's) discoveries are descriptively interesting and deserve investigation, regardless of the "magnitude" of the statistics.

Statisticians and engineers disagreed on test data that predicted the space shuttle would blow up in the cold, because "it had not been empirically tested and there was no data that met mathematical rigor". Guess who was right!

Papers and pundits down here use the lack of "proof" in the polls (of manipulation) to AVOID fixing the election system, just like cigarette manufacturers used lack of "proof" of causes of cancer for decades due to "statistical significance". Regardless of the effect size, TIA and EDA force the issue to the surface.

I still think pollsters are irresponsible (unlikely) or incompetent (unlikely) or scared to piss off the paycheck (likely). There may be a combination of the three. Otherwise, we would see more action to address the issues.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:14 AM
Response to Reply #104
106. oh, c'mon
The generic poll results cited by TIA are mutually contradictory, end of game. Once the P value goes out the window, what is left? And he is still defending the P value.

If you want to make an argument of your own, go ahead.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 11:43 AM
Response to Reply #73
112. Little exercise for you, Sancho
Assume that the sample size for each of the 10 polls was around 1000 (as TIA does).

Calculate the probability that each poll was drawn from the same population.

And when you've done that, tell me why you think TIA's assumption that they were is justified.
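
If it helps, here is one way to set the exercise up: a chi-square test of homogeneity, sketched with placeholder numbers (substitute the actual two-party Dem shares from the ten polls):

```python
import math

# Placeholder two-party Dem shares for ten generic polls, each assumed to
# have n = 1000 respondents. These are illustrative numbers only.
dem_share = [0.52, 0.53, 0.54, 0.55, 0.55, 0.56, 0.56, 0.57, 0.58, 0.60]
n = 1000

# Chi-square test of homogeneity: could all ten polls plausibly be random
# samples from one and the same population of voters?
pooled = sum(dem_share) / len(dem_share)
chi2 = sum(n * (p - pooled) ** 2 / (pooled * (1 - pooled)) for p in dem_share)
df = len(dem_share) - 1

critical_5pct = 16.92     # chi-square critical value for df = 9, alpha = 0.05
print(f"chi-square = {chi2:.1f} on {df} df (5% critical value = {critical_5pct})")
print("same-population assumption rejected" if chi2 > critical_5pct
      else "consistent with a single population")
```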
Printer Friendly | Permalink |  | Top
 
progressoid Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:00 AM
Response to Original message
59. But have you allowed for the CHEMTRAILS variant?
:yoiks: Hmmmm?
Printer Friendly | Permalink |  | Top
 
Name removed Donating Member (0 posts) Send PM | Profile | Ignore Sat Nov-25-06 10:10 AM
Response to Original message
61. Deleted message
Message removed by moderator. Click here to review the message board rules.
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:16 AM
Response to Reply #61
62. my fantasy:
that everyone who offered a content-free criticism of unnamed others' "agenda" would feel compelled to put up or shut up, as if facts mattered.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 02:04 PM
Response to Reply #62
72. It's funny
Some here are more equal than others. I guess they just can't take criticism.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 05:35 PM
Response to Reply #72
78. not only do I take criticism
but if it is thoughtful and substantive, I respond to it. Sometimes even if it isn't.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 11:29 PM
Response to Reply #78
97. Ok
Edited on Sat Nov-25-06 11:30 PM by BeFree
Are you telling us that we should accept what Diebold and ES&S have sold us?

Can you prove the results were not laden with miscounted votes?
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 04:54 PM
Response to Reply #97
142. Evidently not.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 05:01 PM
Response to Reply #142
143. Well, it should be pretty damn
evident from his posts that he isn't. Jeez, I wish some people would actually read.
Printer Friendly | Permalink |  | Top
 
mom cat Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Nov-29-06 12:42 PM
Response to Reply #143
146. A lot of people who disagree with you can read quite well.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Nov-29-06 12:47 PM
Response to Reply #146
147. In that case,
why would they think that he was "telling us that we should accept what Diebold and ES&S have sold us?"
Printer Friendly | Permalink |  | Top
 
truedelphi Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:26 AM
Response to Original message
65. Keep up the good work n/t
Printer Friendly | Permalink |  | Top
 
Skinner ADMIN Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 01:23 PM
Response to Original message
69. I'll save myself the effort
and simply post a link to my previous post:

http://www.democraticunderground.com/discuss/duboard.php?az=show_mesg&forum=364&topic_id=2775205&mesg_id=2781808

Because my response is still correct, and TIA's central assumption is still wrong. He can try to dazzle people with a wall of words and numbers, but it will not matter. He cannot turn falsehood into truth simply because he wishes it to be so.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 10:21 PM
Response to Reply #69
94. I don't think the analysis is an embarrassment, what is embarrassing
is honest people like Skinner and TIA and all the other honest people having this grand ol' pie fight, while the people who are responsible for it (the people in charge of the secret vote counting machines) sit back laughing at us, as we are distracted from the real problem, THE SECRET VOTE COUNTING MACHINES!
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Nov-25-06 11:26 PM
Response to Reply #94
96. And the further embarrassment is
that while some make arguments, as TIA has, that try to prove their position that the counts produced by Diebold, ES&S et al, are not to be trusted, there is no way in hell the naysayers can produce one iota of proof that the counts are correct.

TIA's analysis can be picked apart because he has the balls to make an effort. And the embarrassment comes when the naysayers produce nothing to support the Diebolds and ES&S's. Let them try to prove the counts were correct; that would only be fair.

Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 02:08 AM
Response to Reply #96
101. Yes, think Heller, Sancho, Funk and Shelley - they went down. Why?
Because all of these people were about to strike at the HEART of the vote counting scam.

Yet we keep debating whether TIA's numbers are correct...........

Thanks BeFree.. :-)
Printer Friendly | Permalink |  | Top
 
Skinner ADMIN Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:34 AM
Response to Reply #96
109. Please do not lump me in with Diebold and ES&S.
I am on your side. But unlike some people, I happen to believe that our side needs to be making accurate arguments. If we make too many mistakes, then our entire effort can be dismissed.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 10:46 AM
Response to Reply #109
111. Indeed
Sorry for any inference, Sir.

We know which side you are on. The very existence of this forum is all the evidence any of us need to make that determination. I think I can speak for all the souls in ER when I say, Thank You. Your service to our democracy has been overwhelming. Your contributions to the cause rate as high as any other singular individual.

The idea that we must be as accurate as possible is good advice and, I think, well heeded. If TIA's theory holds water -- and common sense tells me it does -- then it will be proven to be accurate. On the other hand, it may be proven to be somewhat inaccurate and that too will come out in the wash.

This forum will keep on washing, thanks to you.

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:39 PM
Response to Reply #96
120. Where to start....
You are of course quite right that TIA has the balls to make an effort. And anyone who makes that effort takes the risk of having their work "picked apart" (as, in fact, contrary to your implication, mine has been).

But you are quite wrong to imply that those who find fundamental errors in TIA's analysis are "naysayers" who are somehow asserting that "the counts are correct". It is possible for TIA to be wrong AND for the counts to be wrong. I think TIA is wrong, and I think there is plenty of evidence that the counts may also be wrong. That does not make TIA correct, and it does not make it sensible to use TIA's arguments to advance the case that the voting machines are unreliable, insecure, and may have been used to steal votes. Because all Diebold and ES&S would have to do is point to the egregious errors in his logic and dismiss the charge.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 05:57 PM
Response to Reply #120
123. So don't....
Don't use it. I will, and you can just stop trying to keep me from using the info I want to use to express my common-sense knowledge that many millions of votes were stolen in 2006. There may be a few errors here and there in TIA's work, but as a whole it makes good sense. I don't believe the numbers produced by Diebold et al., and I suggest that until you produce numbers that make us feel we CAN trust Diebold et al., you'll get no belief from me that we can trust Diebold et al.

And yeah, I have read your other analyses and have debunked them up one side and down the other, mainly because the database is hidden from me. Besides that, it makes little sense. Here, TIA makes the data quite available and it makes sense. Thank you very much.

And I dare Diebold to argue their case in this forum, or any other. So what they say about TIA is of no consequence.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 06:25 PM
Response to Reply #123
124. I certainly
won't use it. I have no intention or ability to stop you using it, I simply point out the reaction you are likely to get if you do.

You need no more data than that provided by TIA himself to figure out that something is wrong with the polls.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 06:50 PM
Response to Reply #124
125. Shoot
There were probably more hands involved - in the open - in building the poll numbers than there were - in secret - building the result numbers.

What is the probability that ALL the polling data lean one way while the results flip-flop the other way? What are the odds?

It's almost as if you are saying the pollsters aren't worth a shit.... that no matter what they do it is wrong. Well, I have more faith in the pollsters' raw numbers than I ever will in the cooked numbers of Diebold.

"....figure out that something is wrong with the polls."

See, now if you had said there must be something wrong with the polls and the machine counts, then the intellectual integrity of the masses would not have been violated, eh?



Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 07:15 PM
Response to Reply #125
126. No, I'm not saying
"the pollsters aren't worth a shit". I'm merely pointing out that if one poll (CNN) had the Dems up 20 points, and another (Pew) had them up 4, then at least one of them must have been wrong. And with a little math, it is easy to demonstrate that the error was greater than could be attributable to "chance".

So we know there was something wrong with some of the polls.

But you are right - the final margin (so far) was probably "significantly" smaller than the mean estimate of those ten polls. However, because we know (we don't guess, we know) that there was error in at least some of those polls that was not simply "sampling error", there is no way that we can infer that the difference between the average of the polls and the final result was due to error in the count. It might have been. But we know there was error in the polls.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 07:26 PM
Response to Reply #126
127. But we know
There are errors in the results. And the errors in the results are the only errors that matter. The only question is: how many errors were there?

TIA and I believe there were about 5 million errors and the poll people's numbers support that belief.

Can you even try to show that there were not 5 million errors in the results? Sure, you can try, but it would be impossible without examining the machines. And the cursory machine examinations made here over the years jibe real well with the poll people. Making this equation: 1+1=2.

What you offer is 2-0=0. Nothing. See?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 07:44 PM
Response to Reply #127
128. Yes, we know
that there were errors in the results. And although I've been too busy with other things to look very closely, it seemed pretty clear to me that the Florida errors were in the Republicans' favor.

Now, of course, you are entitled to scale that up to 5 million if you want. But what you are not, statistically speaking, entitled to do is to infer it from "the poll people's numbers". They simply do not support that inference, and TIA's calculations are wrong.

If you want to guess a number, you might as well draw it out of a hat. But you won't win any more arguments doing it that way than using TIA's statistics. You can't get any sensible numbers out of a set of polls at which the merest glance is enough to tell you that they didn't even agree with each other, let alone with the final result.

If you have ten clocks all telling you a different time, you know that at least nine of them are wrong. Unless you know which ones, there is no way of knowing whether most of them are fast or most of them are slow.
Printer Friendly | Permalink |  | Top
 
BeFree Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:11 PM
Response to Reply #128
130. Poll peoples' numbers
Have a higher confidence level than Diebold's results.

TIA has a higher confidence level than many here.

Using clocks is just not too smart. Like comparing an abacus to computers. I am surprised! But let's go with that..... of all the clocks here I do know which one is correct. TIA.

Printer Friendly | Permalink |  | Top
 
Awsi Dooger Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 01:16 AM
Response to Reply #126
131. I thought it was buncha shit
Just an observation. I follow trends and I've noticed a high probability that when someone responds to Febble using the word shit, she replies including the exact shitty quote, and very early in her response.

I hope I haven't encouraged anyone to test the theory, at least not regularly, or to use a related word. :)



Printer Friendly | Permalink |  | Top
 
anaxarchos Donating Member (963 posts) Send PM | Profile | Ignore Sat Nov-25-06 10:56 PM
Response to Reply #69
95. You might want to exert that effort...

The central thesis in your link is summed up by this:

"If you want to see how the outcome compares to the pre-election polls, you need to look at the pre-election polls that list candidates by name from each and every congressional district. Someone mentione up-thread that this is precisely what folks like Charlie Cook and Stu Rothenberg did before the election, and their predictions were quite accurate."

Even a casual search on Charlie Cook and Stu Rothenberg and "generic polls" yields dozens of both pre-election and post-election comments by both, using generic polls in a similar manner to TIA. Certainly, they checked the generics against specific contests and other polling, but in this election the generics seem to have been the most important instrument used by both, in their detailed projections and in their post-mortems alike.

There have been problems with generics in the past, but they were clearly useful in 2006, and neither Cook nor Rothenberg is a very good source for your blanket dismissal. Given that, you may want to explain why a compilation of over one hundred generics constitutes "completely false assumptions" or a "worthless" analysis.

http://www.cookpolitical.com/
November 6, 2006

All Monday there was considerable talk that the national picture had suddenly changed and that there was a significant tightening in the election. This was based in part on two national polls that showed the generic congressional ballot test having tightened to four (Pew) and six (ABC/Wash Post) points.

(snip)

Furthermore, there is no evidence of a trend in the generic ballot test. In chronological order of interviewing (using the midpoint of field dates), the margins were: 15 points (Time 11/1-3), 6 points (ABC/Wash Post), 4 points (Pew), 7 points (Gallup), 16 points (Newsweek), 20 points (CNN) and 13 points (Fox).

In individual races, some Republican pollsters see some movement, voters "coming home," in their direction, and/or some increase in intensity among GOP voters. All seem to think that it was too little, too late to significantly change the outcome. However, it might be enough to save a few candidates. None think it is a major change in the dynamics of races, and most remain somewhere between fairly and extremely pessimistic about tomorrow's outcome.


http://blog.washingtonpost.com/thefix/2006/08/parsing_the_polls_wave_buildin_1.html

The answer, according to Charlie Cook and Stu Rothenberg, is a guarded yes.
"If you take an average of the last three or four polls, because any one can be an outlier in either direction, you can determine which way the wind is blowing, and whether the wind speed is small, medium, large or extra-large," said Cook. "The last three generics that I have seen have been in the 18 or 19 point range, which is on the high side of extra large. That suggests the probability of large Democratic gains."
"The generic surely reflects voters dissatisfaction with the President and his party and their inclination to support Democrats in the fall," agreed Rothenberg. "The size of the Democrats' generic advantage also can't be ignored. It too suggests the likelihood of a partisan wave, even though it does not guarantee the fate of any individual Republican incumbent."


http://rothenbergpoliticalreport.blogspot.com/2006/07/why-its-too-early-to-predict-what-may.html

Second, the ballot test in the July 10-13 Cooper & Secrest poll strongly mirrors the generic ballot in the district. Donnelly leads Chocola by 10 points in the ballot test (48 percent to 38 percent) while a generic Democratic candidate leads a generic Republican by 10 points as well (46 percent to 36 percent). The Democrats’ generic ballot advantage grew from 1 point in November 2005 to 10 points earlier this month, which also helps explain Donnelly’s improved standing in the July survey.


Personally, I'm not sure about the probability or extent of fraud in 2006... or about the practicality of mass fraud in mid-term congressional elections in general. Nevertheless, dismissing that possibility is not as easy as we might want it to be.


Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:11 AM
Response to Reply #95
105. an interesting post
I don't think anyone dismissed the "possibility" of mass fraud in 2006.

But you have yet to present evidence that Cook and Rothenberg ever used generic results "in a similar manner to TIA." Presumably anyone who compares your quotations with the OP can see what I mean. Cook and Rothenberg were cautious; TIA is scientistic.

If we are going to get hung up debating the meaning of "completely," I don't see how we will get much work done.
Printer Friendly | Permalink |  | Top
 
Skinner ADMIN Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:28 AM
Response to Reply #95
107. They're using the generic ballot in a similar manner to TIA? Oh really?
Edited on Sun Nov-26-06 08:29 AM by Skinner
Find me the place where they use generics to argue that millions of Democratic votes were lost. Or find me a place where they claim the generic ballot provides an accurate prediction of the final vote count.

Do they use generics to inform their predictions? Of course they do. Generic ballots are a useful tool. I have never argued that they are not, and I have never argued that they are completely worthless. What I have said is that they are not supposed to accurately predict the final vote. All of the quotes you have provided in your post are fully consistent with my argument.

And, for the record, I am not "dismissing the possibility of fraud." I consider it offensive and misleading to suggest that I have some sort of hidden nefarious agenda. I fully support any and all efforts to secure our elections and to prevent and uncover fraud. But that doesn't mean that I am required to shut off my brain and pretend not to notice when people on my own side make glaring errors. TIA may not care, but in the real world credibility matters. You can't keep making very basic rookie mistakes without people eventually dismissing the entire effort as a fabrication.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:22 PM
Response to Reply #107
118. And for the record
Your last paragraph applies to me too, and, I suggest, to every ER poster who has dared to question the credibility of claims that polling data are clear evidence of fraud.

I see little good and much harm resulting from incredible claims. We have plenty of claims that are all too credible.

Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 08:29 AM
Response to Reply #95
108. Excellent...
let's take the assertion (TIA or EDA or whomever) and see if it makes sense. We can worry about statistical assumptions later. Statistics tell us where to look or how to describe the issue.

The magnitude of the problem doesn't matter - do you care if elections were hacked in 5 precincts or 500 precincts?
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:26 PM
Response to Reply #108
119. For once I agree (partly)
Elections hacked in 5 precincts are as worrying as elections hacked in 500 - actually more so, as they are more likely to escape detection, and could nonetheless overturn results. Plus any hack casts doubt on the result in any race.

But that is precisely why I think it is important to do what we can to get a good estimate of the scale of the problem. Thinking it is on a scale of millions, when a scale of hundreds is (as I believe) more likely, is not going to help target efforts on finding it.

I'd start in Florida.
Printer Friendly | Permalink |  | Top
 
bleever Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 12:06 AM
Response to Original message
98. Damn, it feels good to breathe fresh air again!
Edited on Sun Nov-26-06 12:13 AM by bleever
People I respect and have (if "respect" doesn't include it) affection for are now passionate about how best to determine how much the Bush regime has cheated us out of, and the best ways to make that clear to history, both far and near.

To everyone here, and their reasons for being here: :toast:
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:14 AM
Response to Reply #98
99. My reason for being here: rich kids will not count my kids' votes in the future
That is what's happening to us right now, whether anyone wants to believe it: THE RICH PEOPLE ARE COUNTING OUR VOTES IN SECRET, and to date we are allowing them to DO IT. Why?

Because they manipulate us into debates that cover up the fact that they (the rich people) are indeed counting our votes in secret.

Printer Friendly | Permalink |  | Top
 
slackmaster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 01:38 AM
Response to Original message
100. "Don't expect that this post will be peer-reviewed."
Edited on Sun Nov-26-06 01:48 AM by slackmaster
Why not?

Interested parties can review it on their own.

The math is way over the heads of most college graduates. Submitting the work for review by qualified people would clear up any question about the soundness of the methodology and computations.

What is TIA afraid of? That his underlying assumptions will be challenged and found lacking?
Printer Friendly | Permalink |  | Top
 
Name removed Donating Member (0 posts) Send PM | Profile | Ignore Sun Nov-26-06 02:21 AM
Response to Reply #100
102. Deleted message
Message removed by moderator. Click here to review the message board rules.
 
slackmaster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 10:20 AM
Response to Reply #102
110. Peer review is the best way to get TIA some real credibility
Edited on Sun Nov-26-06 10:25 AM by slackmaster
Anonymous people of unknown qualifications have posted here what appear to be some very pointed critiques of TIA's methodology. But without knowing who they are or who TIA is, even educated people don't have enough information to take either TIA or his critiques at face value.

kster wrote:

Can you get the powers that be to open up their proprietary "secret" vote counting machines, or turn over OUR ballots? If you can not do this YOU DO NOT HAVE A CLUE!!

Your response does not in any way follow what I posted. It is a non-sequitur.

You are talking SHIT!

The whole point, kster, is that without some kind of credible peer review and audit of his work, out in the open for all to see, neither you nor I really have any way of knowing whether TIA is talking shit or is just plain self-deluded, as researchers often become when they work in isolation.
Printer Friendly | Permalink |  | Top
 
kster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 03:01 PM
Response to Reply #110
121. The reason it does not make sense to you is
because I put it in the wrong spot, oops! Long night. You are correct about Peer review.
Printer Friendly | Permalink |  | Top
 
slackmaster Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Nov-26-06 03:55 PM
Response to Reply #121
122. Peace be with you, kster
Peace, and tranquility.

:beer:
Printer Friendly | Permalink |  | Top
 
Land Shark Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Nov-28-06 11:17 PM
Response to Original message
145. Kick, thanks for the detailed analysis
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-01-06 08:51 PM
Response to Original message
167. A good point by TIA....from the scholars!
As I've been learning about the world of pollsters (thanks to links from Febble and my own searching), I agree with much that I found in these articles from a single volume of POQ. The exit pollsters have sacrificed accuracy for a single poll that is fast but has too much error. TIA is appropriately using multiple sources that serve as checks and balances against each other, or multiple sources of evidence.

It seems to me that many of the failings of the VNS exit polls are intuitively as serious as those committed by TIA, even if TIA's "math" and "assumptions" are not peer reviewed! Mitofsky (if you read between the lines) pretty much describes the poll failures (at that time) in "technical terms", and the connection of pre-election to exit polls is a suggested solution, as is the use of multiple polls, as are appropriate sample sizes, etc. In effect, VNS called elections with logic mistakes at least as "bad" as those TIA is accused of on this blog, but the VNS had the responsibility and paycheck to get it right!

The pollsters also refuse to consider fraud (as alleged here on occasion), but admit they cut corners. The original poll designers also describe TV station analysts performing mathematical projections (comparisons with previous years' polls) without the power and data to do the job, essentially not meeting the assumptions. The fact that nonresponse error is common and often cited doesn't mean that the more exotic, but equally bad, mistakes by the VNS don't deserve attention; and considering fraud as a source of variation is missing from the "scholars" as consistently as TIA suggests it as the problem.

"Only a poor craftsman blames his tools." :think:

References are in academic libraries in pdf:

The case for caution: This system is dangerously flawed
Anonymous
Public Opinion Quarterly; Spring 2003; 67, 1; ABI/INFORM Global
pg. 5

News organizations' responses to the mistakes of election 2000: Why they wil...
Kathleen A Frankovic
Public Opinion Quarterly; Spring 2003; 67, 1; ABI/INFORM Global
pg. 19

Voter news service after the fall
Warren J Mitofsky
Public Opinion Quarterly; Spring 2003; 67, 1; ABI/INFORM Global
pg. 45

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Dec-02-06 12:23 PM
Response to Reply #167
168. A couple of things:
First of all, far from being "clearly missing" from the scholars you reference, miscounted votes as a source of exit poll distortion makes an appearance on page 7 of your first link:

In Florida, those who did participate, that is, the people who did report how they had voted, assumed that their votes were being counted. That was not necessarily the case. Many votes were not counted. Also, many people voted incorrectly on what turned out to be a very confusing ballot. The hanging chads, the not-fully-perforated chads, and the butterfly ballot became famous icons of voter confusion and disenfranchisement in the aftermath of the Florida vote. The failure to record some intended votes may have further distorted the exit poll findings.


Nonetheless, I take your point (indeed it is one that I have been at pains to make myself) that there are plenty of known sources of inaccuracy in the exit polls. And we do not even need to consider fraud to establish the existence of non-sampling error in the pre-election polls, because they are all intended to be samples from the same population. We need only compute the between-poll/within-poll variance ratio, i.e. do an F test, to establish that they are not samples drawn from the same population.
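A rough sketch of that between-poll/within-poll check (a chi-square version of the same idea), in Python, using the seven final generic margins quoted from Cook upthread (+15, +6, +4, +7, +16, +20, +13). The sample size of about 1,000 per poll and the assumption that roughly 95% of respondents name a major party are illustrative guesses, not the pollsters' actual figures:

margins = [0.15, 0.06, 0.04, 0.07, 0.16, 0.20, 0.13]   # Dem - Rep, as proportions
n = 1000                                                # assumed sample size per poll

mean_m = sum(margins) / len(margins)
# within-poll sampling variance of a margin, assuming ~95% name a major party
sampling_var = (0.95 - mean_m**2) / n
# ratio of observed between-poll scatter to the expected sampling variance
chi_sq = sum((m - mean_m)**2 for m in margins) / sampling_var
df = len(margins) - 1
print(f"chi-square = {chi_sq:.1f} on {df} df")   # ~23; the 5% cutoff is about 12.6
# The spread between these polls is nearly four times what sampling error alone
# would produce, so at least some of them carry non-sampling error.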

You state that "TIA is appropriately using multiple sources that serve as checks and balances against each other, or multiple sources of evidence." Using multiple sources can serve as "checks and balances" against each other, but if you don't know the source of the error, you don't know whether the mean will be closer to the true value than any one poll. If you have one clock that tells the right time, and several that tend to run fast, you will not get a more accurate estimate of the time by taking the average than by looking at the one correct clock. You will get a less accurate estimate. The problem arises when you don't know which clock is correct. You might assume that errors are as likely in one direction as the other, in which case, your best guess will be the mean. Or you might assume clock makers play safe and design their clocks to err on the fast side, in which case your best guess might be that the slowest clock is most likely to be correct. Or, if you were worried you were going to be late, your best assumption might be that the fastest clock was correct.

This is why assumptions matter. We have absolutely no information as to which of those 10 polls is the most accurate - all we know is that at least 9 of them are biased in some way. If we have reason to think that poorer methodology may tend to result in pro-Democratic bias the best estimate might be that the polls with the lowest margins were closest to the truth. However, if we think that poor methodology results in randomly distributed error, our best estimate might be the mean. But without more information as to the likely sources of non-sampling error, we cannot make this judgement.

So I do not agree with you that TIA is "appropriately" using his multiple sources. He is using them inappropriately, IMO, in that by using the mean value of the polls as his estimate of the true value in the population, he is assuming that any non-sampling error has a mean value of zero, which is not a justified assumption, and he is ignoring the highly significant between-poll variance in his probability calculations, which is simply wrong.

And as always, I say - if you want to demonstrate election fraud with numbers, go find some real numbers. There are plenty in Florida!


Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 06:01 AM
Response to Reply #168
169. Without getting TOO technical..."blaming your tools" was the main point here.
It's very common, as you know, for various forms of convergent and divergent validity studies to deal with the sources of error and the identification of the more representative samples. The idea that different polls, different samples, and even different (but similar) questions should be explainable is not a bad thought, and TIA often expresses it. If you put 2+2=? on a test in England and I do the same, do we need some exact "form" of question or sampling distribution? We want to know if people can add! If you suspect there is cheating in a classroom, then let's go watch that class more carefully. It's the same with polls and who you voted for or intend to vote for...

It's easy to "demonstrate fraud" in Florida. I have plenty of negative correlations between the undervote by precinct and Jennings %. That is not true for the Republican. How can a "random error" know which candidate to drop? Unfortunately, there are different levels of evidence required to "convince" others. My wife confronted the election supervisor in 2004 with a DRE that would not let her vote for Betty Castor! For us, that was fraud, while for others it was a single machine glitch. For some, convincing evidence will never be enough, whether videos by Harris of poll tapes disappearing in Volusia county, or missing votes in Sarasota. Our lawyers use conclusions by "statisticians" (E-M and pollsters) that there is no "proof" at some magic level (.05, .01, .001?) to AVOID ELECTION REFORM, and we need to deal with this issue! TIA adds some balance in an adversarial legal world - even though both sides are not accurately estimating "probabilities that the null is true". TIA avoids "assumptions" and E-M "assumes the election results are correct". To courts, the most convincing expert isn't right, just convincing! (OJ Simpson!)

TIA seems to focus on some good questions, and it's time to answer the questions instead of "blaming the tools." TIA is convincing, E-M is not! That's why I challenge the pollsters to make their system transparent and more responsive. Freeman seems to be putting together an attempt to do this, but I think the big companies like E-M ought to be able to address these things...if they want to! Arguments over the tools are not the only problem.... :dilemma:

IF poll items were "calibrated" so that they precisely assessed the "amount of democraticness and republicanness" in a respondent, and if there were a connected subset of items on pre- and exit polls, then one source of error would be modeled (measurement) and another COULD BE greatly reduced (representativeness). My accuracy at predicting an individual's probability of voting a certain way would be much better than simply asking them whether "they plan to vote for the Democratic candidate" or "who did you vote for in the last election". This may be something that pollsters typically do, but we can't get the data! That information would lead to more power to determine if demographic profiles and other predictors of generalizability apply to a given sample, or even if a result is so inconsistent with a local PRECINCT profile that there must be some hanky-panky!
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 07:22 AM
Response to Reply #169
170. Well, I still
find your argument confused. TIA's claim is not that the polls were faulty, but that they were not. And they clearly were faulty. Therefore you cannot infer fraud, as he does, from the polls.

You may well infer fraud from the Sarasota data, and you can certainly infer that whatever was wrong in Sarasota differentially disenfranchised Democrats.

There are many other good questions that need to be asked, but they are not, as far as I can see, ones that TIA is asking. On the contrary, he is asserting that fraud can be inferred from the polls, and it can't be, for the reasons I gave.

But I am interested in your own assertion that "our lawyers use conclusions by 'statisticians' (E-M and pollsters) that there is no 'proof' at some magic level...to avoid election reform". I have seen this suggested before, but I have seen absolutely no evidence for it. What lawyers are you talking about? Can you give any source for this assertion?

And your assertion that "TIA avoids 'assumptions'" is simply wrong. He makes many assumptions, some of which are demonstrably unjustifiable, and in this case, actually belied by his own data.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 12:42 PM
Response to Reply #170
172. Ok...
I'm not saying TIA is correct in his "mathematical assumptions". I'm saying that there are better ways to collect data, connect items in surveys, and infer fraud or eliminate other issues. We continue to focus on the probability methodology, not on the improvable process and instruments. TIA has "intuitively" suggested what we often do in research (even post hoc) when we look at convergent and divergent evidence - but he doesn't do that type of analysis and simply pools everything indiscriminately.

You are correct, we can infer "possible" fraud from some patterns, and discount some others, but we're NOT ABLE to narrow down the "causes" to ballot design, fraud, or machine errors, etc. without the precinct data. Poll data with the correct questions MATCHED to certain precinct-level problems would certainly help, especially if the previous problem districts were targeted with larger samples, specific questions, etc. Pollsters don't consider that their "responsibility". Unfortunately (and this goes to the lawyer questions), the lawyers and election supervisors who debate requests for revotes and process audits in front of judges will write briefs or argue that they have "professional poll data" that doesn't show errors outside the "MOE", and various lawmakers will often decide NOT to pursue the issue. This is common in Florida, but I suspect local laws and other systems have some differences. Poll data is often part of the reports submitted to the court even when no experts are called. I don't know what will happen here in 2006, nor do I know what information various judges decided to accept as evidence and what they rejected. I suspect it's a case-by-case issue. If I see a specific instance or document, I'll try to get one for you. When there is no revote, the documents for inaction aren't widely published.

In my world, NONE of this poll data would be acceptable, as Likert scales and similar items have to be transformed to interval/ratio level before we would consider using them for any high-stakes decision or parametric analysis. So I would not really do anything the way that pollsters do now, as I would say that they violated an "assumption" as badly as TIA to start with by applying many of the statistical processes they use to raw data. This is particularly true of surveys or interviews, so I'm not sure criticism of TIA is warranted until the critics do better. Some of the articles I'm reading in poll journals now seem to report exactly the situation I describe, but there are different levels of sophistication.

In short, the answer to improving the poll data and finding fraud involves better processes and more transparency, not a single common exit poll and a hide-away location. Some common connections across polls, from generic to exit, would also help.
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 07:38 AM
Response to Reply #169
171. if you find TIA convincing, you should explain why
To elaborate on a small part of Febble's point, TIA assumes that the exit polls should be accurate within sampling error. There is extensive contrary evidence, some of it cited in your own sources, as you know.

Likewise TIA assumes that generic House polls should be accurate within sampling error, even though (1) they aren't mutually consistent and (2) TIA's own source indicates that the polls overstate the popular vote margin.

If you want better exit polls, fine, but that is a far cry from asserting in passing that TIA "avoids assumptions" and that you find him "convincing." That raises eyebrows.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 03:04 PM
Response to Reply #171
173. I used the word convincing in the sense that a large part of this blog ...
as the public does. As stated before, Consumer Reports is "convincing", but not peer reviewed. The New England Journal of Medicine is "convincing", but often wrong and misunderstood. In a popular sense, TIA is convincing. Even if one doesn't know a bit of statistics, the "face value" of poll data and election predictions is NOT convincing. Pollsters who "weight" data are not publicly convincing, partly because people see incorrect forecasts from polls and partly because the process is hidden.

In Sarasota, the "test" for voting error and all the undervotes is to run a fake election on 5 spare machines and 5 used machines with election staff doing the voting. Obviously, this doesn't tell us anything reliably about hundreds of machines used by actual voters! To the judge, if they don't spot a problem, it's CONVINCING that nothing is wrong!

If I took Freeman and TIA and even my own ideas and projected them into attempted improvements...so that in 2008, generic polls agreed to have some common connected questions with each other and with the exit polls (TIA/EDA). Exit pollsters were prepared with a "hit squad" in a van in Florida, Ohio, etc. to run to problem precincts and collect large samples that would figure out what was happening (Sancho). If poll websites and interviewers voluntarily asked large samples of people to "revote" in order to confirm the election (Freeman); maybe even a set of DREs in the parking lot to voluntarily mimic the election and take a paper "receipt" home with you. If precinct-level data were available, even if only to qualified people who would report results but maintain individual privacy (Febble? et al.); if item analysis were available matched to some precinct demographics, etc...

These things would go a long way toward finding the different machine errors, manipulations, voter suppressions, and similar concerns. Right now, TIA is CONVINCING even if based on (to use your convention) incorrect parametric assumptions that don't meet all the mathematical standards; but actually NONE of the quasi-experimental poll data is laboratory pure, so why not take the stuff at face value? That's the "public view."

Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 07:30 PM
Response to Reply #173
175. hmmmm
I don't think TIA is much like Consumer Reports or the New England Journal of Medicine. I think he is much more like Answers In Genesis. And yes, some people -- some very smart people, in fact -- do find that convincing. It troubles me. He is still (last I checked) gabbling about "43/37 weights," and some people think he has won the argument. Oh well.

As for the rest, certainly exit polls could be redesigned to better serve the purpose of validating elections. I still think it's a better idea to fix elections, but if some folks want to jigger with exit polls as well, I don't object.
Printer Friendly | Permalink |  | Top
 
philb Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 11:46 PM
Response to Reply #173
178. I have stat & prob background, haven't followed details, but my analyses support TIA & Freeman's
Edited on Sun Dec-03-06 11:50 PM by philb
conclusions, based on my analysis of the huge election protection monitoring system's voter-reported irregularities in 2004, plus the lesser effort in 2006, also supported by monitoring results of Common Cause, other state organizations and county SOE reports, VotersUnite machine problem reports, etc., and some detailed precinct analyses by county looking at elections since 2000. Making statistical assumptions I considered conservative, I found that
precincts that reported touch screen switching appeared to have a significant swing in votes in the direction of the switching - note that not all of the swing was necessarily due to switching; other things were seen to be going on in the same direction.
Likewise, similar analyses found that in precincts with machine problems, broken machines, shortages of machines, and long lines, there was a significant reduction in official voter turnout compared to years with fewer long lines or to other precincts that didn't have long lines.
In other words, ironically, precincts with long lines usually reported low official voter turnout -
not because of desire, but because of suppression/manipulation/etc.

In any case, based on these databases and analyses and what I considered conservative assumptions, I estimated an approximate
300,000-vote swing in Florida in 2004 from Dems to Repubs, and an over-150,000-vote swing from Dems to Repubs in Ohio - perhaps even higher if you count the huge number of minority voters inappropriately purged in both states in 2000, 2004, and 2006.

Similar but somewhat smaller numbers were estimated for other states with lots of reported irregularities.
Altogether it comes to a significant swing through switching, "glitches", suppression, purges, long lines in minority areas,
manipulation of absentees, provisionals, etc.

And the reports for 2006 indicate that there were again significant problems with machine switching, "glitches", machine problems, long lines, voter suppression, large numbers unable to vote, systematic dirty tricks and malfeasance to suppress minority votes, etc.

summary of EIRS and VotersUnite reported irregularities, augmented by some other reports
2006 www.flcv.com/eirstss6.html
www.flcv.com/eirsppp6.html
www.flcv.com/eirsoth6.html
www.flcv.com/eirsdt6.html

2004
www.flcv.com/summary.html
www.flcv.com/fla04EAS.html Florida
www.flcv.com/ohiosum.html Ohio
www.flcv.com/studentv.html student vote suppression (much happened again in 2006, apparently, but with fewer reports)


PS: I note the Dems seem to be competing with the Repubs in some areas they controlled this time in 2006, with switching and other things going on in a couple of states - in those states there seemed to be inappropriate things going on in both directions, depending on the race and area.





Printer Friendly | Permalink |  | Top
 
philb Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 11:56 PM
Response to Reply #178
179. PS: what is the probability that the 19.5% undervote in Butler Co., Ohio in Congressional races was caused
by voters who just didn't think the Congressional races were important this year?
See the close Congressional race thread; there was also another one like it.

Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Dec-04-06 09:27 AM
Response to Reply #179
182. hard to say, actually
This has nothing to do with TIA, but it's an important question in its own right.

The little piece in CD 1, the more competitive race, looks fine. Comparisons are tricky because some precincts are split, but p. 64 of the Butler County canvass report shows about 58% turnout in that race, which seems in line with others. I spot-checked Hanover's turnout in the governor's race, and the numbers seemed in line.

But of course most of Butler is in CD 8, Boehner's district. That race wasn't competitive at all, so it's hard to know what drop-off to expect. I don't know how far even the residual-vote gurus have gone in examining the empirical range of drop-off rates in down-ticket races. Certainly the circumstance isn't analogous to Sarasota, where the race was competitive, and where the undervotes were out of line with other counties and with the absentee votes.
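For anyone unfamiliar with the term, a short sketch of how a residual-vote (drop-off) rate like the 19.5% figure upthread is computed; the ballot counts here are invented for illustration, not Butler County's actual numbers:

ballots_cast = 100_000        # total ballots cast (invented)
votes_in_race = 80_500        # ballots recording a vote in the House race (invented)
residual_rate = 1 - votes_in_race / ballots_cast
print(f"residual (undervote) rate = {residual_rate:.1%}")   # 19.5%
# Whether such a rate is suspicious depends on the expected drop-off for an
# uncompetitive down-ticket race, which is exactly the open question here.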
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Dec-04-06 08:58 AM
Response to Reply #178
181. that doesn't support TIA's analysis
To save TIA's analysis -- well, actually, nothing can save TIA's analysis. It's inherently flawed. But to save TIA's conclusion that the 2004 exits were accurate, you would need to come up with more like half a million votes in Ohio -- and those should be votes actually cast or believed cast, not votes prevented by long lines &c. People who don't vote because of long lines should never show up in an exit poll.

More strictly, TIA can (ironically) pretty much hide behind sampling error in any single state. But then his analysis isn't helping the cause very much.

Fraud/miscount arguments aren't all interchangeable and mutually supporting. I don't know what I think of yours (actually, it's hard to pin down the analytic results), but I know what I think of TIA's. Same as Skinner in #107: TIA makes basic rookie mistakes. If there is actually a strong case to be made that Kerry won the popular vote, then it's tragic that we have TIA's instead.
Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 06:58 PM
Response to Reply #168
174. The references and "assumptions" for the scholarly challenged...
Edited on Sun Dec-03-06 07:05 PM by Sancho
I considered the references on page 7 as problems, but not necessarily fraud. Not a big issue to me as long as we both understand.
-----whew----------
Below are the assumptions of a subset of the parametric statistics that survey people use - the "short" list for theoretical folks - and why polls will NEVER meet the assumptions any "BETTER" than TIA does. The question is what you can conclude after we all violate assumptions. These are the quick and dirty links from a newsletter that I have handy, not the math....DU appears to have picked ONE issue to debate with TIA (nonresponse error). Can you imagine if we debated all of these?!?! :crazy:

--------------------------------
specific objectivity
http://www.rasch.org/rmt/rmt83e.htm
------------------------------
quality of data
http://www.rasch.org/rmt/rmt111n.htm
------------------------------
independence with raw measures
http://www.rasch.org/rmt/rmt72n.htm
---------------------------------
raw data is not interval or ratio level
http://www.rasch.org/rmt/rmt131e.htm
---------------------------------
Non-scaling problems with polls..
http://www.ingentaconnect.com/content/mcb/036/2006/00000023/00000004/art00003?crawler=true
---------------------------------------
comonotonicity
http://www.medscape.com/medline/abstract/15701945
------------------------
non-normality and robustness
http://www.psychology-science.com/4-2004/10-vonEye.pdf
-----------------------
Likert scale assumptions
http://www.rasch.org/rmt/rmt82d.htm
------------------------------------------
sufficiency
http://www.rasch.org/rmt/rmt63c.htm
-----------------------------------------------
dimensionality
http://www.rasch.org/rmt/rmt201.pdf


Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 08:06 PM
Response to Reply #174
176. No, page 7
doesn't use the f word, but the point is the author does not assume the vote count is correct.

I think it is extremely important to distinguish between an assumption made for the purpose of projecting the official result, and assumptions made in trying to figure out why the raw responses differed from the official result. For the former, the pollsters assume the count is accurate, and re-weight to the count, in a perfectly standard post-stratification weighting procedure. For the second, they DO NOT make that assumption, although nor do they assume that their poll is accurate.

E-M did, in fact, test to see whether voting methodology was a contributor to the discrepancy (it was, in fact, but not in the direction suggested by the hypothesis that the election was stolen on DREs - the discrepancy was greatest for levers and punchcards), and I also tested fraud hypotheses.

But what I don't understand, Sancho, is why you keep mentioning Likert scales. The "who did you vote for" question is not a complicated one. People did not vote for Kerry "slightly". The math does turn out to be complicated, because the relevant unit of analysis is the precinct, and the relevant quantity is the proportion of exit poll tallies versus the proportion of counted votes for each candidate, and quantifying this discrepancy turns out not to be straightforward. But it isn't that we need to analyse the question more thoroughly; it's a problem that arises directly from the binomial theorem.

Here you go:

http://inside.bard.edu/~lindeman/ASApaper_060409.pdf

And almost certainly "non-response error" was not the greatest contributor. A far greater contributor to the discrepancy seems to have been that Kerry voters were more likely to be selected. This hypothesis was very strongly supported by the data, as redshift was strongly correlated with factors, such as long interviewing intervals, likely to have made it easier for unwilling voters to evade selection.





Printer Friendly | Permalink |  | Top
 
Sancho Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-03-06 09:54 PM
Response to Reply #176
177. I can see there are lots of disjoint methods...but here goes...
The fraud thing is semantics...where I saw something as more narrow. Not a problem, just a description.
------------------------------------------------
I looked at the paper, and WPE makes sense within a context to me, but one I have to put into my experience. (Sorry). I'll incubate on it.

I see most survey questions, regardless of "form" - Likert, Guttman, Thurstone... - as variations on ordinal RATING of intent or value, and I used "Likert" as a generic term: the idea is that a threshold exists, and the probability that a person would vote for a given candidate is based on their underlying amount of "democracy or republicancy value". The item asks for and attempts to get a resulting estimate without knowing the person's threshold!

If I have a set of questions - Who are you likely to vote for?, How much money do you make?, Who did you vote for last time?, What is your race?, etc. - there is a cumulative conjoint probability (the "value" within the person) AND the estimate of the likelihood that a given candidate "fits" that value. Both have to be estimated!

The first task is to get the calibration of items for a given person who responds, so we can predict very accurately the likelihood that they will cast a given vote for a candidate, but also toss the "misfitting" persons, fix the items with unlikely patterns, and reach a cumulative probability, for a set of persons from a poll, that the sample will elect a given candidate. Calibrations are sample free and don't require normality. We can also determine during the same poll calibration the "value" of the candidate to voters on the same scale of "democraticness" compared to a set of candidates.
The next step is to take items that contribute to the calibration that are useful as external "facets" (are you female, etc.) and see if that helps generalize a sample that may have a bias in selection. One thing I don't see in the pollsters' articles so far is that the calibration of generic to exit poll items would provide an estimate of the "fair average" in the case of estimating a predictable (how many Dems are registered?, what's the average income in the precinct?, etc.) candidate and population, even with a "biased" sample. This would need the precinct public profile.

In the case of misfitting persons, items, or investigation of "facets" (contributing variation like the type of machine, etc.), we would hopefully be quick to focus on issues that arise and "send in the troops" whenever misfit is detected. Misfit would be "unlikely patterns" on any variable or persons given a calibrated set of voters and calibrated set of candidates.

This very quick description would only be possible with a reasonable size sample (depends on effect) and precinct level data. I would test for fraud with a conjoint probability, not just the error in the raw scores from the data alone without person fit or item fit as variables. That seems to be a difference between me and pollsters as far as I can tell. Calibrated items would be in an item bank that a given form could use, or even be delivered interactively, so the next item asked depended on the response to the last...

Finally, I would not use the "election results" as the criterion; I would use each candidate's value on a scale of more to less "democraticness". The election results may be ONE facet (item) in the calibration, but certainly not the only one. This would be much more precise and give an error estimate for every item and every person's vote for a given candidate. The nonresponse error would be less able to "muddle up" the true intent variance. In fact, a misfitting "item" (the results of the election) that was unlikely would be a quick clue that the election was "biased" as opposed to the sample outcome.

Finally, as you know, I'd do the absolute best I could to get a large and representative sample from specifically targeted precincts and races where error was expected. To me, WPE is one source of error, but I cannot see it EVER being modeled accurately with the insufficient information provided. It is only part of the information available as I see it. I honestly hope that helps and doesn't make things more confusing! To me, the pollsters are faced with an issue that needs some new efforts, but that's the first "model" I thought of as I have considered the issues so far.
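A bare-bones sketch of the kind of calibration being described - my own illustration, not Sancho's actual model: a Rasch-style setup in which each respondent has a latent "democraticness" measure theta, each item or candidate has a threshold b, and the probability of a pro-Democratic response is a logistic function of the difference. All numbers are invented:

import math

def p_dem(theta, b):
    # probability of a pro-Democratic response given person measure and item threshold
    return 1.0 / (1.0 + math.exp(-(theta - b)))

theta = 0.8   # a respondent who leans Democratic (invented measure)
items = {"generic ballot": -0.2, "named candidate": 0.3, "vote last time": 0.0}
for name, b in items.items():
    print(f"{name}: P(Dem response) = {p_dem(theta, b):.2f}")
# "Misfit" would be a response pattern that is very improbable given the fitted
# theta and b values - the flag for "sending in the troops" to a precinct.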

A classic reference for this model would be. "Rating Scale Analysis" Wright & Masters, 1982.
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Dec-10-06 09:25 PM
Response to Original message
183. Kick for the Truth
TIA is not funded by anybody other than himself. He's not a party official or an association official.
He's not an academic or a consultant. He's just a guy who has worked his ass off every day for the past years in order to hold the election thieves' feet to the fire and make them behave.

Kick for the TRUTH!!!
Printer Friendly | Permalink |  | Top
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Sun Feb-04-07 04:26 PM
Response to Original message
186. View this Nov. '06 thread in relation to the TIA FAQ Response
Edited on Sun Feb-04-07 04:30 PM by caruso
This TIA guy loves to crunch the numbers.


Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Sun Feb-04-07 05:01 PM
Response to Reply #186
187. I still think Skinner got it right
TIA's analysis is wrong; a bunch of folks have explained why the analysis is wrong; I'm still waiting for a succinct plain-English rejoinder, not that I think one is possible.
What about the preelection "generic" polls that showed Democrats leading by double digits? The generic polls ask whether respondents would vote for the (unnamed) Democratic or Republican candidate in their own district. Some fraud-minded observers have selectively quoted Bafumi, Erikson, and Wlezien's statement that the "generic polls turn out to be very good predictors" of the actual vote. But Bafumi et al. do not mean what these observers want them to mean. On the contrary, they argue that the polls "perform poorly as point estimates," and must undergo further analysis to "discount the exaggerated sizes of the generic poll leads." In fact, if we use Bafumi et al.'s model with Simon and O'Dell's estimate of the generic Democratic lead, the model projects an actual vote margin of about 7.8 points, close to the official returns. Bafumi et al. also report that their margin of error (95 percent confidence interval) for vote share is about 3.7 points—which means that their margin of error for vote margin is over 7.0 points. A predicted Democratic margin of "eight points plus-or-minus seven" hardly supports suspicions of massive fraud.

While generic polls on average tend to overstate Democratic margins, final Gallup polls have (on average) been more accurate. David Moore and Lydia Saad reported that for midterm elections from 1950 through 1990, the final Gallup poll had an average absolute error under 1.3 points on vote share—a record that continues through 2006. For what it is worth, in 2006, the final Gallup poll projected a 7.0-point Democratic margin, again close to the official returns.

Thus, generic polls actually don't support Simon and O'Dell's inference of massive vote miscount. Nor do polling results in individual races, where the polls name the candidates. Among all House races for which pollster.com reported poll results, the median Democratic vote margin was about 0.3 points larger than the pollster.com five-poll average. Limiting the analysis to competitive races yields similar results. In Senate races, the median Democratic candidate did 1.7 points better than the pollster.com average. Of course miscount is perfectly possible in individual races. Indeed, the evidence for miscount or some other frustration of voter intent in Sarasota County is very strong. But survey-based evidence of a "landslide denied" is hard to descry.

http://publicopinionpros.com/features/2007/jan/lindeman3.asp
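The "over 7.0 points" step in the quoted passage is just the usual share-to-margin conversion; a one-line sketch, assuming nearly all votes go to the two major parties so the margin moves about twice as fast as either share:

moe_share = 3.7                 # reported 95% MoE on vote share, in points
moe_margin = 2 * moe_share      # margin = Dem - Rep, so an error in one share roughly doubles
print(f"MoE on the margin ~ {moe_margin:.1f} points")   # about 7.4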
Printer Friendly | Permalink |  | Top
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 01:28 AM
Response to Reply #187
188. TIA correctly forecast the mid-terms and districts where fraud would most likely occur
Edited on Mon Feb-05-07 01:49 AM by caruso
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 04:46 AM
Response to Reply #188
189. How do you know it was correct? n/t
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 06:05 AM
Response to Reply #188
190. The proof is in the pudding and he did a great job. Welcome to DU!
The geocities site is quite something, isn't it?
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 07:34 AM
Response to Reply #188
191. that isn't responsive to the argument
Frankly, TIA pulls a typical shtick here. He expresses incredulity that I would say that Pew's 4-point margin was "probably not far off"; the final margin from official counts, as compiled by a Daily Kos diarist, is 7.9%. He asks why I "chose to believe" Pew instead of the "other 119 pre-election polls." Well, the Pew result is about 4 points off from the final margin, so TIA might check how many of his pre-election polls yielded Democratic margins between 4 points and 12 points. Answer: 72 of them. So why does TIA say that I "chose to believe" Pew?

But the real problem here is that TIA has no answer to the evidence that generic polls-of-polls tend to overstate the winning margin. He apparently has no evidence to rebut Bafumi et al.'s observation that the generic polls "perform poorly as point estimates," while regression equations "properly discount the exaggerated sizes of the generic poll leads."

It's weird when someone ignores an argument and then claims to have rebutted it.
Printer Friendly | Permalink |  | Top
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 12:01 PM
Response to Reply #191
192. TIA explains why the Generic Polls overstate the winning margin
Edited on Mon Feb-05-07 12:16 PM by caruso
Apparently, you did not read the text.
http://us.share.geocities.com/electionmodel/TruthIsAllFAQResponse.htm#UncountedSwitchedVoteMidTerm

You imply that the winning margin reflects the true vote (i.e. there is no fraud). Not true.

Uncounted votes are typically 3% of total votes cast - in every election. The net impact on the Democratic margin is a loss of about 1.5%, assuming 75% of uncounted votes are Democratic. So that accounts for the difference between the generics and the actuals in the last 16 mid-terms.
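A minimal sketch of the arithmetic behind that "about 1.5%", using the assumptions stated above (3% of ballots uncounted, 75% of those Democratic); the inputs are TIA's assumptions, not verified data:

uncounted_rate = 0.03          # assumed share of all ballots cast that go uncounted
dem_share = 0.75               # assumed Democratic share of the uncounted ballots
rep_share = 1 - dem_share
# each uncounted ballot removes one vote from its candidate, so the net hit to
# the Democratic margin (as a share of total votes cast) is:
margin_loss = uncounted_rate * (dem_share - rep_share)
print(f"net Democratic margin loss ~ {margin_loss:.1%}")   # 1.5%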

Of course, this does not include the vote-switching, which has become endemic since Bush stole the 2000 election.

TIA points out the following identities:

The Intended Vote is given by:
IV = Recorded + Uncounted + Switched + Disenfranchised

The True Vote is given by:
TV = Recorded + Uncounted + Switched


I hope that is a satisfactory explanation for the discrepancies.

Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 12:14 PM
Response to Reply #192
193. no, it isn't a satisfactory explanation
Has TIA ever read Bafumi et al.? Have you?

Their empirical result isn't that the generic polls overstate the official Democratic winning margin by 1.5% in every election, as one might expect if the vote count was consistently off by 1.5% and the generics were accurate. It's that the larger the generic margin, the larger the average discrepancy between the generic margin and the vote count. See, for instance, the charts on page 8 (PDF page 10) of their paper here.

Even if we suppose that vote spoilage has cost the Democrats about 1.5% in every election -- which is far from proven -- it doesn't explain these results. I'm not sure why this isn't obvious.
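To make the point concrete, here is an illustrative sketch with made-up generic leads (not Bafumi et al.'s data): a constant spoilage effect implies a constant poll-versus-count discrepancy, whereas the pattern they report grows with the size of the generic lead.

# Illustration only: made-up generic-poll leads, two toy models of the official margin.
generic_leads = [2.0, 5.0, 8.0, 12.0, 16.0]          # hypothetical Democratic leads, in points

offset_model = [m - 1.5 for m in generic_leads]      # count is always 1.5 points less Democratic
shrinkage_model = [m * 0.5 for m in generic_leads]   # count reflects only about half the generic lead

for g, c, s in zip(generic_leads, offset_model, shrinkage_model):
    print(f"generic lead {g:5.1f}   offset-model gap {g - c:4.1f}   shrinkage-model gap {g - s:4.1f}")

# The offset model produces a constant 1.5-point gap; the shrinkage model's gap
# grows with the lead, which is the pattern described in Bafumi et al.'s charts.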
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 01:08 PM
Response to Reply #192
194. A point worth making
about spoiled votes: even if Greg Palast is correct with his 3% figure, and even if the majority of those are Democratic votes, this might result in pre-election polls over-stating the official count, but it probably wouldn't account for a discrepancy in the exit polls. The reason is that precinct selection is weighted by the total vote from the last race, and is also stratified by the partisan division of that race. The principle behind this is to ensure that every voter has an equal chance of being represented in the poll. However, if some voters, or groups of voters, are systematically excluded from the vote count, they will also be systematically excluded from this calculation:

How do you select sample precincts?
The polling places were selected as a stratified probability sample of each state. The purpose of stratification is to group together precincts with similar vote characteristics. A recent past election was used to identify all the precincts as they existed for that election. The total vote in each precinct and the partisan division of the vote from this past race are used for the stratification. In addition, counties are used for stratifying the precincts. The total vote also is used to determine the probability of selection. Each voter in a state has approximately the same chance of being selected in the sample.


http://www.exit-poll.net/faq.html#a8

In other words, if the counting system tends to undercount votes from highly Democratic precincts, this under-count error will be passed on to the exit poll, so that these precincts (and thus their voters) will also be under-represented in the next poll. In effect, these voters disappear from both.

Like TIA, I am appalled by the high rate of residual votes and its cost to Democratic candidates. Not only did it cost Gore the presidency, but it is simply a matter of civil rights, whoever those voters voted for. However, it is not at all clear that it can account for much of the "redshift" observed year after year in the Edison-Mitofsky exit poll. It might interest TIA to know, though, if he happens to be hovering, that when I used a ratio measure of precinct-level discrepancy, rather than WPE, which is a subtraction measure, the discrepancy in highly Democratic precincts was more marked, and it tended to show up in large urban precincts using non-digital voting technology.
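For anyone unfamiliar with the two kinds of measure, here is a rough sketch of why a subtraction measure (like WPE) and a ratio measure behave differently in lopsided precincts. The definitions and sign conventions below are simplified illustrations, not the exact formulas used in the Edison-Mitofsky evaluation or in the analysis described above.

import math

def wpe(dem_poll, rep_poll, dem_vote, rep_vote):
    # Subtraction measure: exit-poll margin minus official margin, in points
    # (sign convention chosen for illustration).
    return (dem_poll - rep_poll) - (dem_vote - rep_vote)

def log_alpha(dem_poll, rep_poll, dem_vote, rep_vote):
    # Ratio measure: log of the poll's Dem/Rep ratio over the count's Dem/Rep ratio.
    return math.log((dem_poll / rep_poll) / (dem_vote / rep_vote))

# The same 4-point shift looks very different on the two scales depending on
# how partisan the precinct is (the precincts below are hypothetical).
for dem_vote in (50.0, 90.0):
    rep_vote = 100.0 - dem_vote
    dem_poll, rep_poll = dem_vote + 2.0, rep_vote - 2.0   # poll is 4 points more Democratic
    print(dem_vote, round(wpe(dem_poll, rep_poll, dem_vote, rep_vote), 2),
          round(log_alpha(dem_poll, rep_poll, dem_vote, rep_vote), 3))

# WPE is 4 points in both precincts, but the ratio measure is roughly three times
# larger in the 90% Democratic precinct -- one reason a ratio measure can make
# discrepancies in highly partisan precincts more visible.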
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 02:07 PM
Response to Reply #194
195. well, we were discussing generics, but
indeed the spoiled votes have been invoked to "explain" past exit poll discrepancies, as they were invoked here to "explain" past generic poll discrepancies. And both "explanations" fail to fit the data that they are purported to explain. (Although, your "although" is a very interesting one.)
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 02:39 PM
Response to Reply #195
196. I was aware the discussion was about generics
but elsewhere I have read the generic argument used to boost the exit poll argument and vice versa, and the 3% spoiled votes called in aid of both. Clearly spoiled votes could help to explain the generic poll discrepancy (but not all of it, even on a generous reading), but I don't see how they could explain much of the exit poll discrepancy, although I do think that there is some evidence in the data that they may explain a bit.
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 04:06 PM
Response to Reply #196
197. Some facts regarding the Bafumi, TIA, and Pew Generic models
Edited on Mon Feb-05-07 04:25 PM by caruso
The Generic polls, when adjusted historically for uncounted votes (and, since 2002, for switched votes), were excellent predictors of the True vote count. Here is the proof.

1- The Bafumi study produced a projection model based on a multiple regression of 16 mid-terms vs. Generic polls. The professors note:
"Based on the current average of the generic polls (57.7% Democratic, 42.3% Republican) the forecast
from this equation is a 55% to 45% Democratic advantage in the popular vote".

Of course, one would expect the current average to deviate from the historical pattern, since historically about 3% of the votes cast, mostly Democratic, go uncounted.

http://64.233.187.104/search?q=cache:zjC6OVUBA2kJ:www.pollster.com/guest_pollsters_corner/bafumi_erikson_wlezien_forecas.php+2004+generic+polls&hl=en&gl=us&ct=clnk&cd=3
_________________________________________________________________________________

2- The Bafumi model was very close to TIA's 120-poll Generic model, which forecast a 56.43% Democratic vote share.

120 Generic Poll Linear Regression Trend Model
Dem = 46.98 + .0419x
GOP = 38.06 + .0047x

Substituting x = 120 and allocating 60% of the undecided vote (UVA) to the Democrats:
........ Trend + UVA = Projection
Dem = 52.01 + 4.42 = 56.43%
Rep = 38.62 + 2.95 = 41.57%
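For what it's worth, the projection above can be reproduced by evaluating the two trend lines at x = 120 and splitting the allocated undecideds 60/40. Reproducing the exact 4.42/2.95 allocation requires reserving about two points of the remainder for minor-party candidates, which is an assumption rather than something TIA states here.

# Sketch of the 120-poll trend projection quoted above.
def trend_projection(x=120, undecided_to_dem=0.60, other_share=2.0):
    dem_trend = 46.98 + 0.0419 * x            # 52.01 at x = 120
    rep_trend = 38.06 + 0.0047 * x            # 38.62 at x = 120
    undecided = 100.0 - dem_trend - rep_trend - other_share   # other_share is an assumption
    dem = dem_trend + undecided_to_dem * undecided
    rep = rep_trend + (1.0 - undecided_to_dem) * undecided
    return round(dem, 2), round(rep, 2)

print(trend_projection())   # -> (56.43, 41.57)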
_________________________________________________________________________________

3- According to Wikipedia, the vote count on Nov. 7 (before the fraud kicked in) matched the TIA 120-poll and Bafumi Generic projection models to within 1%.
http://en.wikipedia.org/wiki/United_States_House_elections%2C_2006

Party    Seats 2004   Seats 2006   +/-    Popular Vote    Share     +/-
Dem         202          233       +31      39,267,916    57.7%   +11.1%
Rep         232          202       -30      28,464,092    41.8%    -7.4%
Indep         1            0        -1          69,707     0.1%    +0.5%
Other         0            0         0         255,876     0.4%    -3.2%

Total       435          435         0      68,057,591     100%

_________________________________________________________________________________

4- This 2002 Pew analysis of Generic polls from 1954-2000 indicated that they matched the vote count to within 1.1%.
http://people-press.org/commentary/display.php3?AnalysisID=55

But it's even closer than that, since the Democrats account for approximately 2.25 of the 3 percentage points of (spoiled, lost) votes which are NEVER COUNTED. The loss reduces the Democratic margin by 2.25% - 0.75% = 1.5%. THEREFORE, THE HISTORICAL GENERIC POLLS CAME WITHIN 0.4% OF THE TRUE VOTE; THAT IS, UNTIL BUSHCO CAME ALONG AND STOLE THE 2002 MID-TERMS BUT FELL SHORT IN THE DEMOCRATIC TSUNAMI OF 2006!
_________________________________________________________________________________

Why The Generic Ballot Test?
Released: October 1, 2002

Throughout the election season, the Pew Research Center and other major polling organizations report a measure that political insiders sometimes call “the generic ballot.” This measure is the percentage of voters in national surveys who say they intend to vote for either the Republican or the Democratic candidate for the U.S. House of Representatives in their district.*

*(If the elections for U.S. Congress were being held today, would you vote for the Republican Party’s candidate or the Democratic Party’s candidate for Congress in your district?)

There is no national election for Congress, of course; rather, 435 individual races determine the composition of the House. So while it might seem that the generic ballot is too broad a measure to forecast the outcome, it has PROVED to be an ACCURATE predictor of the partisan distribution of the NATIONAL vote.

The final forecast of the generic House vote and the actual vote totals have PARALLELED each other VERY CLOSELY for nearly a half-century in U.S. elections. The average prediction error in off-year elections since 1954 has been 1.1%. The lines plotting the actual vote against the final poll-based forecast vote by Gallup and the Pew Research Center track almost perfectly over time.

More..
_________________________________________________________________


5- The 11/6 Pew pre-election RV poll (2,369 respondents) had the Democrats ahead by 48-40% with 10% undecided. Allocating 7.5 of the 10 undecided points to the Democrats, the projection becomes 55.5D-42.5R. Allocating the undecideds in the Pew LV poll (47-43) the same way, the projection becomes 53D-45R.

So maybe Pew was not that bad an outlier after all.
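A quick sketch of that allocation arithmetic (the undecided pool sizes are taken or inferred from the numbers above; a 75/25 Democratic/Republican split of the allocated undecideds is assumed):

def allocate(dem, rep, undecided, dem_share=0.75):
    # Split the undecided pool between the two parties.
    return dem + dem_share * undecided, rep + (1.0 - dem_share) * undecided

print(allocate(48, 40, 10))   # RV poll -> (55.5, 42.5)
print(allocate(47, 43, 8))    # LV poll, ~8 points allocated (inferred) -> (53.0, 45.0)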

Profiling the Voters
Pew Research Center for the People & the Press
November 6, 2006
Congressional Vote and Issue Preferences

Among Registered Voters

Question: If the 2006 elections for U.S. Congress were being held TODAY, would you vote for the Democratic Party's candidate or the Republican Party's candidate for Congress in your district? As of TODAY, do you LEAN more to the Democrat or the Republican?



_________________________________________________________________

6- This is a relevant post on Generic Polls by DUer Time For Change.
http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=364x2859666


7- This is from a TIA analysis of the final 10 Generic Polls:

Poll     Date     Dem   Rep   Margin
Harris   10/23     47    33     14
AP       10/30     56    37     19
CBS      11/01     52    33     19
Nwk      11/03     54    38     16
TIME     11/03     55    40     15
CNN      11/06     58    38     20
FOX      11/06     49    36     13
Outliers:
Pew      11/04     47    43      4    (RV poll: Dem 48 - Rep 40)
ABC      11/04     51    45      6
USA      11/06     51    44      7

The Outliers Projected (75% of undecideds to the Democrats):
Pew      11/04     53      45     (RV: 55.5 - 42.5)
ABC      11/04     52.5    45.5
USA      11/06     53      45


Poll Averages:
                    Dem     Rep    Margin
3 outliers:
  Average          49.67   44.00    5.67
  2-party          53.02   46.98

7 non-outlier polls:
  Average          53.00   36.43   16.57
  2-party          59.27   40.73

All 10 polls:
  Average          52.00   38.70   13.30
  2-party          57.33   42.67
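The averages and two-party shares in that table can be recomputed directly from the ten polls listed above (a minimal sketch; the outlier grouping follows TIA's list):

polls = {                       # poll: (Dem, Rep)
    "Harris": (47, 33), "AP": (56, 37), "CBS": (52, 33), "Nwk": (54, 38),
    "TIME": (55, 40), "CNN": (58, 38), "FOX": (49, 36),
    "Pew": (47, 43), "ABC": (51, 45), "USA": (51, 44),
}
outliers = {"Pew", "ABC", "USA"}

def summarize(names):
    dem = sum(polls[n][0] for n in names) / len(names)
    rep = sum(polls[n][1] for n in names) / len(names)
    return round(dem, 2), round(rep, 2), round(100.0 * dem / (dem + rep), 2)

print(summarize(outliers))               # -> (49.67, 44.0, 53.02)
print(summarize(set(polls) - outliers))  # -> (53.0, 36.43, 59.27)
print(summarize(polls))                  # -> (52.0, 38.7, 57.33)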

That should do it.

 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 04:54 PM
Response to Reply #197
198. Bafumi's regression formula:
Edited on Mon Feb-05-07 05:01 PM by Febble
Bafumi, Erikson and Wlezien concluded that it was possible to forecast the result accurately using generic polls.

http://www.temple.edu/ipa/Documents/Forecasting%20House%20Seats%20from%20Generic%20Congressional%20Polls.pdf

They give the formula here:

The easy part is forecasting the vote from the generic polls. To properly interpret the generic polls, we estimate a regression equation predicting the vote in the 15 most recent midterm elections, 1946-2002, from the average generic poll result during the last 30 days of each campaign. (Details are shown in the appendix.) Based on this analysis, we can confidently offer the following rule of thumb for predicting the national vote based on polls over the last 30 days before the election:
1. convert the percentage point lead, e.g., Democrats 51% Republicans 41%, in the generic poll to a percent Democratic of the two party vote, e.g., 51-41 converts to 55% Democratic or 5% more Democratic than 50-50;
2. if the poll is based on registered voters rather than “likely” voters, subtract 1.5 percentage points—thus a 56%-44% Democratic lead in a registered voter poll converts to a narrower 54.5%-45.5% lead in terms of likely voters;3
3. cut this lead in half; and
4. add a percentage point to the Democrats as a reward for being the non-presidential party.
From the regression analysis, our 95% confidence interval for the forecast using this formula is +/-3.7 percentage points.
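A minimal sketch of that rule of thumb in code (the function simply mechanizes the four steps quoted above; treating the 57.7/42.3 generic average cited earlier in the thread as a likely-voter figure is an assumption):

def rule_of_thumb(dem, rep, registered_voters=False):
    # 1. Democratic share of the two-party vote
    dem_two_party = 100.0 * dem / (dem + rep)
    # 2. registered-voter polls run about 1.5 points more Democratic
    if registered_voters:
        dem_two_party -= 1.5
    # 3. cut the lead over 50-50 in half
    forecast = 50.0 + (dem_two_party - 50.0) / 2.0
    # 4. add a point for being the non-presidential party (the Democrats in 2006)
    return forecast + 1.0

print(rule_of_thumb(57.7, 42.3))   # about 54.85 -- close to the 55-45 forecast quoted upthread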


You will note that this formula assumes that a registered voter model will be more generous to the Democrats than a likely voter model, and that the generic poll will overstate Democratic support. Their model, at the time of your link, forecast a 32 seat gain for the Democrats. IIRC, the gain was 30 seats, and of course should have been 31, from what we know of Sarasota. So, not a bad model.

But of course it tells you nothing about why a model that discounts Democratic support in the poll should be a good predictor of the official vote. All it tells you is that the effect seems fairly generalizable.

Edited to correct spelling of Bafumi's name
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 07:45 PM
Response to Reply #198
200. Summary of Generic models compared to the Wikipedia count
Edited on Mon Feb-05-07 07:47 PM by caruso
The projection models compare quite nicely to the Wikipedia vote count.

But the Final 1pm National Exit Poll was far off.
It was matched to the recorded vote count.

Check the "Voted 2004" weights in the Final: 49% Bush/43% Kerry.
When the weights were adjusted to a more plausible 46% Bush/49% Kerry (based on the 12:22am 2004 National Exit Poll), the Final numbers agreed with the rest.


                Projection          2-party
Model          Dem     Rep        Dem     Rep
Nat Exit      56.70   42.10      57.27   42.73
Generic       56.43   41.57      57.58   42.42
Bafumi        55.00   45.00      55.80   44.20

Average       56.04   42.89      56.88   43.12

Wikipedia     57.70   41.80      57.99   42.01
Deviation     -1.66    1.09      -1.11    1.11

120 Generic Poll Linear Regression Trend Model
Dem = 46.98 + .0419x
GOP = 38.06 + .0047x

Substituting x = 120 and allocating 60% of the undecided vote (UVA) to the Democrats:
........ Trend + UVA = Projection
Dem = 52.01 + 4.42 = 56.43%
Rep = 38.62 + 2.95 = 41.57%
_________________________________________________________________________________

National Exit Poll

VOTED 2004
                ------ 7:07pm ------        ----- 1pm Final -----       --- Adjusted Weights ---
              MIX   Dem   Rep  Other      MIX   Dem   Rep  Other       MIX   Dem   Rep  Other
Kerry         45%   93%    6%    1%       43%   92%    7%    1%        49%   93%    6%    1%
Bush          47%   17%   82%    1%       49%   15%   83%    2%        46%   17%   82%    1%
Other          4%   67%   23%   10%        4%   66%   23%   11%         1%   67%   23%   10%
DNV            4%   67%   30%    3%        4%   66%   32%    2%         4%   67%   30%    3%

TOTAL
(Dem/Rep/Oth) 55.2% 43.4% 1.4%            52.2% 45.9% 1.9%             56.7% 42.1% 1.2%
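The bottom-line totals in that table are just weighted sums of the rows; for example, the "Adjusted Weights" column (numbers taken from the table above):

# (weight, Dem share, Rep share, Other share) for each "Voted 2004" row, in percent.
adjusted = [
    (49, 93, 6, 1),    # voted Kerry in 2004
    (46, 17, 82, 1),   # voted Bush in 2004
    (1, 67, 23, 10),   # voted for another candidate in 2004
    (4, 67, 30, 3),    # did not vote in 2004 (DNV)
]

dem = sum(w * d for w, d, r, o in adjusted) / 100.0
rep = sum(w * r for w, d, r, o in adjusted) / 100.0
print(round(dem, 1), round(rep, 1))   # -> 56.7 42.1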

Wikipedia Summary of the November 7, 2006
United States House of Representatives election results
http://en.wikipedia.org/wiki/United_States_House_elections%2C_2006


Party    Seats 2004   Seats 2006   +/-    Popular Vote    Share
Dem         202          233       +31      39,267,916    57.7%
Rep         232          202       -30      28,464,092    41.8%

Total       435          435         0      68,057,591

 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 08:18 PM
Response to Reply #200
201. Wikipedia, vox dei?
If the Wikipedia numbers you cited are wrong, then one might not want to boast that a projection model approximated them.

They seem to have appeared for the first time on November 18, courtesy of... well, some anonymous IP. Here's the previous version. And here's a later version (also courtesy of some anonymous IP).

On what warrant do you believe that numbers that appeared on Wikipedia as if out of nowhere, on November 18, represent "the vote count on Nov.7 (before the fraud kicked in)"?
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 09:43 PM
Response to Reply #201
202. Strange how Wiki numbers changed but the totals remained the same.
Edited on Mon Feb-05-07 10:41 PM by caruso
I see that Wiki just deleted their original numbers. They must have just noticed this thread. Too bad.

Are these the post-fraud numbers?

http://en.wikipedia.org/w/index.php?title=Template:United_States_House_election%2C_2006&direction=next&oldid=94122969
Is this the recorded vote that the Final NEP matched to?

Revision as of 20:00, 20 December 2006

Party Seats Popular Vote
.....2004 2006 Vote %
Dem 202 233 +31 39,673,226 52.0% +5.4%
Rep 232 202 -30 34,748,277 45.6% –3.6%
Ind 1 0 1 0 69,707 0.7% +0.1%
Oth 0 0 0 0 255,876 0.9% –2.7%
Total 435 435 0 100.0% 68,057,591 100.0% 0

The Democratic vote total increased by about 400 thousand.
The Repub total increased by more than 6 million. WTF?
Can you explain it?

Or are these?
http://en.wikipedia.org/wiki/United_States_House_elections%2C_2006
Funny how the total votes don't match.

Summary of the November 7, 2006 United States House of Representatives election results

Party               Seats 2004   Seats 2006   +/-   Seat %    Popular Vote   Vote %     +/-
Democratic Party       202          233       +31   53.6%      39,673,226    52.0%    +5.4%
Republican Party       232          202       -30   46.4%      34,748,277    45.6%    -3.6%
Independents             1            0        -1      0          501,632     0.7%    +0.1%
Others                   0            0         0      0        1,305,803     1.7%    -1.9%
Total                  435          435         0   100.0%     76,228,938   100.0%       0


-------------------------------------------------------

Seems to me that there was some fishy stuff going on after the initial Nov.7 numbers were posted. Anything to do with the changes from the Nov.7 NEP (7pm) to the Nov.8 Final (1 pm)?




 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 10:46 PM
Response to Reply #202
203. Who just told Wikipedia about this thread?
Edited on Mon Feb-05-07 11:13 PM by caruso
Seems to me that there was some fishy stuff going on after the initial Nov.7 numbers were posted. Anything to do with the changes from the Nov.7 NEP (7pm) to the Nov.8 Final (1 pm)?

This is what they had up until a few minutes ago.
Party    Seats 2004   Seats 2006   +/-    Popular Vote    Share     +/-
Dem         202          233       +31      39,267,916    57.7%   +11.1%
Rep         232          202       -30      28,464,092    41.8%    -7.4%
Indep         1            0        -1          69,707     0.1%    +0.5%
Other         0            0         0         255,876     0.4%    -3.2%

Total       435          435         0      68,057,591     100%

----------------------------------------------------
Are these the post-fraud numbers?

http://en.wikipedia.org/w/index.php?title=Template:United_States_House_election%2C_2006&direction=next&oldid=94122969
Is this the recorded vote that the Final NEP matched to?

Revision as of 20:00, 20 December 2006

Party Seats Popular Vote
.....2004 2006 Vote %
Dem 202 233 +31 39,673,226 52.0% +5.4%
Rep 232 202 -30 34,748,277 45.6% –3.6%
Ind 1 0 1 0 69,707 0.7% +0.1%
Oth 0 0 0 0 255,876 0.9% –2.7%
Total 435 435 0 100.0% 68,057,591 100.0% 0

The Democratic vote total increased by about 400 thousand.
The Repub total increased by more than 6 million. WTF?
Can you explain it?

Note that the vote total 68,057,591 is from the original count.
I guess they're in the process of changing the numbers right now.

Who just tipped them off? Inquiring minds want to know.

Or are these the post-fraud numbers?
Those Wiki guys are sure fast on their feet.

http://en.wikipedia.org/wiki/United_States_House_elections%2C_2006
Funny how the total votes don't match.

Summary of the November 7, 2006 United States House of Representatives election results

Party     Seats 2004   Seats 2006   +/-   Seat %    Popular Vote   Vote %     +/-
Dem          202          233       +31   53.6%      39,673,226    52.0%    +5.4%
Rep          232          202       -30   46.4%      34,748,277    45.6%    -3.6%
Ind            1            0        -1      0          501,632     0.7%    +0.1%
Others         0            0         0      0        1,305,803     1.7%    -1.9%
Total        435          435         0   100.0%     76,228,938   100.0%       0

-------------------------------------------------------




 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 11:34 PM
Response to Reply #203
205. say whaa?
Look, friend, wikipedia is actually not that hard to figure out. Unless you really think that someone has figured out how to rig the entire change history so that it appears that the numbers you cited were posted on November 18, and the later numbers were posted on December 20. I can't rule it out. I do know that the numbers changed more than a few minutes ago.

I don't agree with either set of numbers, although the latter set looks closer.
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Mon Feb-05-07 05:58 PM
Response to Reply #197
199. sorry, no
1. In other words, in your own favored example, Bafumi et al. predicted a 5.4-point difference between the generic poll margin and the projected actual margin. You might be so good as to concede my point directly.

Moreover, the projection you cite (D+10) is closer to the official result (D+7.9, as far as I know) than it is to TIA's projection (D+14.86). If one were to incorporate later polling results, Bafumi et al.'s projection would be even closer to the official result -- but it is well within the margin of error regardless. For that matter, TIA's projection is also within Bafumi et al.'s margin of error. Generics are better than nothing, but they don't yield pinpoint predictions.

2. TIA's "model," apart from the allocation of undecideds, does nothing except to project the final generic poll results. To compare it with Bafumi et al.'s model is to ignore Bafumi et al.'s point that the generics on average overstate winning margins. (And, to repeat my point that you ignored, the overstatement is nothing like a fixed 1.5 points, so we have no basis for attributing it to ballot spoilage.)

3. First, check your link. Then explain why and how you think Wikipedia(!?) temporarily had exclusive access to "pre-fraud" vote counts. (I think the Wikipedia numbers are still wrong, by the way. I think +7.9% is probably pretty much right.)

4. Look again, and this time read the chart. Again, I have explained this before. This story indicates that the Gallup results (up to 1990) and the Pew results (1994-2002) were accurate within 1.1% -- not generic polls in general. (We already knew that couldn't be true, because we read Bafumi et al. Right?) In 2006, the final Gallup and Pew results were D+7 (Gallup/USA Today) and D+4 (Pew). This does not help your case.

5. Well, if you want to argue that Pew 'really' predicted D+8 -- which now appears to match the official result within 0.1% -- who am I to dispute it? But no, I don't approve of ignoring the LV results. Whatever.

6. In what way is it relevant? Maybe you should focus on trying to make your own arguments for a while.

7. So in point 4 you use Gallup and Pew to argue that generic polls are accurate, although in point 1 you tacitly conceded that they tend to ride high, and now here in point 7 you chuck out Gallup and Pew as "outliers" -- even though Gallup's +7 is actually closer to the mean (+13.3) than +20 is. That is, charitably, unpersuasive.
 
caruso Donating Member (48 posts) Send PM | Profile | Ignore Mon Feb-05-07 10:49 PM
Response to Reply #197
204. Self-delete
Edited on Mon Feb-05-07 11:05 PM by caruso
.
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Oct-25-07 09:08 PM
Response to Original message
206. Still great after all this time, the truth is all. n/t
 