2016 Postmortem
How to interpret polls and their accuracies
For those who really care, this is how the accuracy of polls is actually measured:
The margin of error of the polls you hear on TV is not normally stated with full accuracy. It should be stated like this example: "This poll is calculated to be accurate within 3%, 95% of the time." (The 95% figure - roughly two standard deviations from the mean - is the one most used for polling results.)
Since the accuracy of poll results is measured on a bell-shaped curve, in our example this means there is a 2.5% chance that the actual result will be more than 3% off on the low side and a 2.5% chance that it will be more than 3% off on the high side. For instance, let's say that in our example the poll states that a candidate will take 50% of the vote. What the poll is really saying is that there is a 95% chance the candidate will take between 47% and 53% of the vote, a 2.5% chance the candidate will take less than 47%, and a 2.5% chance the candidate will take more than 53% of the vote.
Also unstated is the fact that, because of how probabilities under a bell-shaped curve work, there is a fairly high likelihood that the poll will be far more accurate than 3% either way. In our example there would be approximately a 68% chance that the actual result will be within 1.5% either way (one standard deviation).
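The arithmetic above can be sketched in a few lines of Python. The sample size of 1,067 is an assumption on my part, chosen because it makes the 95% margin of error for a 50/50 question come out near 3 points; the original post does not name a sample size.

```python
import math

def margin_of_error(p, n, z):
    """Half-width of the confidence interval for a proportion p
    estimated from a simple random sample of size n, at z std devs."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 50% support, n chosen so the 95% margin
# of error (z ~ 1.96) lands near 3 percentage points.
n = 1067
p = 0.50
moe95 = margin_of_error(p, n, 1.96)  # ~0.030 -> "within 3%, 95% of the time"
moe68 = margin_of_error(p, n, 1.00)  # ~0.015 -> "within ~1.5%, 68% of the time"
print(round(moe95, 3), round(moe68, 3))
```

Note that the 68% interval is exactly half the width of the 95% interval here, which matches the "within 1.5% either way" figure in the post.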
If you want to understand why an individual poll can be "off" quite a bit, but usually isn't, consider this example. Let's say you have a thousand marbles in a jar, 900 green and 100 red. If you blindly took 100 marbles out of the jar, there is still a remote possibility that all of them would be red. So while it is unlikely, a particular poll can be somewhat off, but it is highly unlikely that several polls which provide very similar results will all be off by much.
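The marble example can be simulated directly. This is a sketch, not anything from the original post: it repeats the 100-marble "poll" many times and checks how often the sample proportion of red marbles lands near the true 10%.

```python
import random

# Jar from the example above: 900 green marbles, 100 red (10% red).
jar = ["red"] * 100 + ["green"] * 900

random.seed(42)  # fixed seed so the simulation is repeatable
counts = []
for _ in range(10_000):               # repeat the "poll" many times
    sample = random.sample(jar, 100)  # draw 100 marbles without replacement
    counts.append(sample.count("red"))

# Most samples land close to the true 10% red; extreme misses (like
# all 100 red) are possible in principle but vanishingly rare.
within_5 = sum(5 <= c <= 15 for c in counts) / len(counts)
print(min(counts), max(counts), round(within_5, 2))
```

Running this, the vast majority of simulated draws fall between 5 and 15 red marbles, which is the intuition behind trusting several polls that agree with each other.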
Because their reputations are at stake, good polling companies go to extraordinary lengths to ensure that the samples their polls are based on represent the target population as accurately as possible. For instance, if a certain percentage of the population uses only cell phones, they will call cell phone numbers to ensure that those people are properly sampled. Or let's say that 20% of young people between 18 and 25 normally vote in elections, but because of the special circumstances of a particular election, the preliminary polling data indicates that 50% of young people will vote this time around. Then the polling companies will adjust their samples to include a larger percentage of young people.
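The adjustment described above amounts to reweighting the sample to match expected turnout. Here is a minimal sketch; all the group shares and support numbers are made up for illustration, not taken from any real poll.

```python
# Hypothetical composition of the raw sample vs. expected turnout by age group.
sample_share  = {"18-25": 0.10, "26-64": 0.60, "65+": 0.30}
turnout_share = {"18-25": 0.25, "26-64": 0.55, "65+": 0.20}

# Candidate support observed within each group (made-up numbers).
support = {"18-25": 0.70, "26-64": 0.48, "65+": 0.40}

# The unweighted estimate just mirrors whoever happened to be sampled...
unweighted = sum(sample_share[g] * support[g] for g in support)

# ...while the weighted estimate re-balances each group to its
# expected share of the actual electorate.
weighted = sum(turnout_share[g] * support[g] for g in support)

print(round(unweighted, 3), round(weighted, 3))
```

In this toy example an expected surge of young voters moves the candidate's estimate up by about four points, which is exactly the kind of correction the post describes.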
Also keep in mind that polls are only accurate "if the election were held today" (the day the sample is canvassed). They cannot take into consideration the effect of unpredictable events which occur between when the poll is taken and election day. Nor can they predict how conditions like very bad weather on election day will affect the results unless they are specifically formulated to do so. Of course, the closer to election day a poll is taken, the more accurate it is likely to be.
The bottom line is that while a particular poll might be somewhat off no matter how diligent the polling company is in choosing and canvassing its sample, polls are far more accurate than we believe them to be when our candidate is behind, and less accurate than we believe they are when our candidate is ahead.
ThePhilosopher04
(1,732 posts)CajunBlazer
(5,648 posts)Dem2
(8,168 posts)Fumesucker
(45,851 posts)rjsquirrel
(4,762 posts)Or is you serious?
CajunBlazer
(5,648 posts)You can always remain ignorant if you choose - the choice is always yours.
rjsquirrel
(4,762 posts)I'm a scientist. I know how polls work better than most because I understand math.
My point was that the OP is obvious.
Bonobo
(29,257 posts)longship
(40,416 posts)CajunBlazer
(5,648 posts)comradebillyboy
(10,183 posts)just highlight the word and opt for the spell check option in the window that pops up.
draa
(975 posts)then you mentioned worrying about reputations. Have you ever heard of Rasmussen (don't answer, of course you have). They have been the worst polling outfit for over a decade. And they still get work so...
If you're not using an aggregate of polls then you're not doing it right.
It might also help if you corrected the misspelling in the title. Interpret is misspelled.
CajunBlazer
(5,648 posts)Not my definition of a good polling company.
Bluenorthwest
(45,319 posts)letters strung together? My eyes are still burning from all the racist anti Obama material and pure hate of Hillary you linked to so it's sort of hard to be sure what I'm seeing here.
CajunBlazer
(5,648 posts)As for the sites, they were taken at random without any scrutney at all simply to make a point - get over it.
As to what you think about me, "Frankly my dear, ......"
rjsquirrel
(4,762 posts)Bonobo
(29,257 posts)NCTraveler
(30,481 posts)Katashi_itto
(10,175 posts)Would have saved you a ton of time instead of writing this pablum.
AgingAmerican
(12,958 posts)See who paid for the poll. Then you will know how accurate it is.
Sancho
(9,070 posts)When different pollsters using different mythologies and different samples obtain similar results, that creates considerable confidence that the survey is a true reflection of the population values.
In this election, the results have been pretty stable for a while. Most demographics are also consistent with expected voter behavior. Unless something changes, the current polls are likely the way voters will vote, especially on the Democratic side where there are few choices and a stable message from each candidate.
Besides polls, endorsements by current politicians, unions, and campaign organizations predict candidate success. Interestingly, crowd enthusiasm has rarely been a good predictor, and social media is an unknown (even though it might be projected to parallel crowd enthusiasm). Crowd enthusiasm can't assess non-ignorable non-respondents.
http://jeb.sagepub.com/content/14/2/121.abstract
So far in 2015, all indicators are pointing the same way. The standard error as a statistic becomes irrelevant in light of multiple measures if they continue to give the same answer.
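The point about multiple measures can be put in rough numbers. Treating several independent polls as one pooled sample (a simplification - real aggregators weight by pollster quality and recency), the combined standard error shrinks with the square root of the total number of respondents. The five polls below are invented for illustration.

```python
import math

# Five hypothetical independent polls: (proportion for a candidate, sample size).
polls = [(0.52, 900), (0.50, 1100), (0.53, 800), (0.51, 1000), (0.52, 1200)]

# Pool them as if they were one big sample.
total_n = sum(n for _, n in polls)
pooled_p = sum(p * n for p, n in polls) / total_n

# Standard error of one typical 1,000-person poll vs. the pooled estimate:
# pooling roughly 5x the respondents cuts the error by about sqrt(5).
se_single = math.sqrt(0.5 * 0.5 / 1000)
se_pooled = math.sqrt(pooled_p * (1 - pooled_p) / total_n)

print(round(pooled_p, 3), round(se_single, 4), round(se_pooled, 4))
```

When five polls that agree all point the same way, the pooled uncertainty is less than half that of any single poll, which is why agreement across pollsters matters more than any one poll's stated margin of error.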