Why public opinion polls are garbage

I realize that American public opinion polls have precious little to do with development or conflict, but I do sometimes write about statistics. (And, like most everyone, I am obsessed with tomorrow’s primary, even though I can’t (yet) vote in this country.)

Back to polls. It seems that anyone who puts faith in these instruments is in for a rude surprise.

American pollster John Zogby recently appeared on The Daily Show to talk about polling during the U.S. Presidential primary season.

Stewart: How many people do you call? What’s the sample size? What’s…

Zogby: About 850, 900.

Stewart: And how many people answer? To get 850 to 900 to answer, how many do you have to call?

Zogby: We have to move into the next state sometimes.

Stewart: So when they say 850 to 900, you might have called 10,000 people?

Zogby: Ohhh, 6,000, maybe 7,000.

Stewart: 100,000. You call everyone in New Hampshire. So the polls should always say: remember, this has a plus or minus error of the loneliest people in New Hampshire?

Zogby’s response implies a survey attrition rate of more than 85 percent. And that’s if I’m hearing right. He might have said 60,000 or 70,000.
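The back-of-the-envelope arithmetic, sketched in Python using the midpoints of the figures from the interview:

```python
# Figures from the interview: roughly 850-900 completed interviews
# out of 6,000-7,000 calls. Midpoints used for illustration.
completed = 875   # midpoint of 850-900
calls = 6500      # midpoint of 6,000-7,000

response_rate = completed / calls
attrition_rate = 1 - response_rate

print(f"response rate:  {response_rate:.1%}")   # ~13.5%
print(f"attrition rate: {attrition_rate:.1%}")  # ~86.5%
```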

Short story: Polling numbers mean nothing. Garbage. Jon Stewart is evidently a more skillful statistician than every reporter in this country.

6 thoughts on “Why public opinion polls are garbage”

  1. Dear Professor Blattman,

    I have been a bit skeptical of the validity of public opinion polls, and I have two questions.

    1. Does the magnitude of the survey attrition rate matter, given a (sufficiently) large sample size?

    2. Also, what if the reason for attrition is irrelevant to the outcome (i.e., for whom people vote)?

    Thank you in advance for your answer.

    Best,

  2. If attrition is random, or irrelevant to the outcome, then it doesn’t matter. Probably the margin of error should be increased to reflect the decrease in accuracy, but the number should not be biased.

    But who answers a telephone survey and who does not is in no way random. How is it not random? We have no idea. Probably wealthier, busier, younger, and English-as-second-language people are more likely to be missed. There’s probably research on this.

    Moreover, not everyone has a listed telephone number. My understanding (perhaps wrong) is that cell phones are not called. Hence their sample frame is already biased.
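To put a number on the margin-of-error point above: the standard 95 percent margin of error for a proportion is 1.96·sqrt(p(1−p)/n), and under purely random non-response, attrition only shrinks n; it does not bias the estimate. A quick sketch:

```python
import math

# Standard 95% margin of error for a sample proportion.
# p = 0.5 is the conservative (worst-case) assumption pollsters use.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# With random attrition, only the completed sample size matters:
print(f"n=875:  +/- {margin_of_error(875):.1%}")   # ~3.3%
print(f"n=7000: +/- {margin_of_error(7000):.1%}")  # ~1.2%
```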

  3. “There’s probably research on this.”

    Made me chuckle.

    There’s quite a bit indeed. See John Brehm’s work, for starters.

  4. PS: Zogby is not exactly a reputable pollster, as anyone who knows anything about public opinion/behavior will tell you.

  5. @anonymous: I’m curious for more. What kind of attrition does a more reputable pollster typically obtain, and do they correct for it? If not (and I don’t see how you can, really) I can’t imagine they get better than 50%, which is still atrocious. I’d be happy to be shown wrong.

  6. Chris, there’s lots of information about these topics at Pollster.com — where, indeed, Zogby is viewed with skepticism, mainly because he is exceedingly unforthcoming about the details of his polls (much as on Jon Stewart’s show).

    A few other thoughts. Non-response is not random, generally, but obviously pollsters weight to account for differences between their sample and canonical data like the Census. Clearly, weighting is more difficult in specialized cases — i.e., where you are trying to sample likely voters. Even still, late pre-election polls tend to be quite close to the actual outcome. (Which makes the occasional miscues — like NH — all the more noteworthy.)

    Cell phones. The Pew Center has done a ton of research on the consequences of cell phones for political polling. Their data suggest that cell phone-only respondents do not have significantly different political preferences. Thus, their exclusion from a traditional phone poll has little consequence. (That may not be true if you’re interested in non-political topics, and obviously that may not be true at some future date when even more people use only cell phones.)

    On attrition. I have information from a major media pollster from a January 2006 poll. To get a sample size of 1001, they made 8,241 calls. HOWEVER, most of those are not refusals. Only 13% of calls ended because of explicit refusals. Most ended either because it was a non-working number (35%) or because no one answered (39%).

    Is that a problem? Again, the Pew Center folks have done research (published in POQ) that finds that working harder to achieve a higher response rate has little effect on a survey’s results.
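For concreteness, here is the call-disposition arithmetic from the January 2006 figures above as a sketch. The "working numbers" adjustment at the end is my own illustration, not the pollster's reported method:

```python
# Figures reported in the comment above (January 2006 media poll):
total_calls = 8241
completed = 1001
refusal_share, nonworking_share, noanswer_share = 0.13, 0.35, 0.39

completion_rate = completed / total_calls
print(f"completion rate: {completion_rate:.1%}")  # ~12.1%
print(f"refusals: {refusal_share:.0%}, non-working: {nonworking_share:.0%}, "
      f"no answer: {noanswer_share:.0%}")

# Hypothetical adjustment (my illustration): exclude non-working
# numbers, which were never eligible respondents to begin with.
eligible = total_calls * (1 - nonworking_share)
print(f"rate among working numbers: {completed / eligible:.1%}")  # ~18.7%
```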