October 18, 2010

MIKE TIPPING: Polls always reflect the pollster, if not the voters

Mike Tipping

Public opinion polls are one of the simplest and most confusing parts of any election season.

They're simple in that they give the public a basic, easily understandable snapshot of a political campaign. One candidate is winning, others are losing; or maybe there's a tie, or one candidate is "gaining momentum." They provide a small set of numbers that allows the media to draft a clear narrative of a race.

But polls don't always agree. Even polls taken at the same time, and in a similar fashion, can produce very different results.

To some degree, this is just statistical noise: polls only approximate the truth, and any single result falls somewhere along a bell curve around it. Most polls conducted by professionals, however, use methodologies based on solid theories of how elections work. Most, then, will produce results reasonably close to one another and to the actual state of the electorate.
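To get a feel for how much random chance alone can move a poll, consider a quick simulation. This is my own illustration, not any pollster's method, and it assumes the simplest possible design: a random sample of 400 voters and a candidate whose true support is 30 percent.

    import math
    import random

    def margin_of_error(p: float, n: int) -> float:
        """95 percent margin of error for a proportion p from a simple random sample of n."""
        return 1.96 * math.sqrt(p * (1 - p) / n)

    # Hypothetical figures: a candidate at 30 percent, polls of 400 voters each.
    true_support, sample_size = 0.30, 400
    print(f"margin of error: +/- {margin_of_error(true_support, sample_size):.1%}")

    # Run five "polls" of the very same electorate. The results scatter around
    # the truth purely from random sampling -- the bell curve at work.
    random.seed(1)
    for i in range(5):
        hits = sum(random.random() < true_support for _ in range(sample_size))
        print(f"poll {i + 1}: {hits / sample_size:.1%}")

Five surveys of an unchanged electorate, and the numbers still drift by a few points. That's the noise baked in before any methodology enters the picture.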

There are, however, significant differences in how pollsters construct their models and arrive at their final numbers.

Over the past few days, I've talked to five different pollsters from four different Maine polling firms. Each has their own methodology, and each has criticized at least part of their competitors' way of doing things. They disagree on whether it's better to use live survey operators or recorded "robopoll" questions, whether surveys should be weighted to reflect the demographics of the Maine electorate -- like age, gender and party identification -- and how questions should be worded to obtain the most accurate results.

How can what's supposed to be a scientific enterprise be so contentious and leave so much room for disagreement?

The answer is that polling is as much art as it is science. Here's an example:

Mark Smith and Stephanie Dunn, owners of the Maine Center for Public Opinion, were kind enough to take me through the process for their latest poll, conducted in early October.

They began by determining the voter history in every Senate district in Maine and using it to create a quota for the number of people registered in each political party that they wanted to sample in each district.

(Polling by regional quota doesn't meet with universal approval by pollsters, but it is used by some firms. Fox News, for instance, does national polls using regional quotas from groups of states.)
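Here's a rough sketch of what quota-building like this might look like, with invented districts and turnout figures standing in for MCPO's actual data:

    # Hypothetical sketch of quota construction. The districts, turnout counts
    # and sample size are invented for illustration, not MCPO's real numbers.
    historical_turnout = {
        # district: {party: past voter turnout}
        "Senate District 1": {"D": 9000, "R": 7000, "U": 8000},
        "Senate District 2": {"D": 6000, "R": 8000, "U": 7000},
    }
    total_sample = 200  # interviews to complete across all districts

    grand_total = sum(sum(parties.values()) for parties in historical_turnout.values())

    # Each district-and-party cell gets a share of the sample proportional
    # to its share of historical turnout.
    quotas = {
        district: {
            party: round(total_sample * count / grand_total)
            for party, count in parties.items()
        }
        for district, parties in historical_turnout.items()
    }
    print(quotas)

The phone bank then keeps dialing until every cell in that table is filled.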

Next, Smith and Dunn determined how they would mathematically alter their sample to reflect what they believe will occur on Election Day. This is where the artistry of polling took center stage.

The commissioner of this poll, Matthew Gagnon of the Pine Tree Politics blog, suggested Smith and Dunn add an "oversample" of Republicans into their quotas to account for what he expects to be a surging GOP turnout.

"I wasn't comfortable with that as a solution," said Smith. "I prefer solutions based on facts, not gut feel, so I did a comprehensive study of voter turnout by party enrollment in each state Senate district for each statewide election going back to the November 2006 general election."

From this study, Smith chose two elections that had higher than average Republican turnout -- the recent June primary and the 2009 gay marriage referendum. Then he averaged the Republican increases from those two elections, divided the result by two, and adjusted quotas in the various districts until they matched his new model.
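With made-up numbers, the arithmetic he describes looks something like this:

    # Invented figures, to illustrate the arithmetic the column describes.
    # Suppose Republican turnout in the two chosen elections ran above the
    # historical baseline by these amounts:
    increase_june_primary = 0.12      # +12% over baseline (hypothetical)
    increase_2009_referendum = 0.10   # +10% over baseline (hypothetical)

    # Average the two increases, then halve the average -- the step Smith
    # never justified.
    adjustment = ((increase_june_primary + increase_2009_referendum) / 2) / 2

    baseline_r_quota = 50  # Republican interviews targeted in some district
    adjusted_r_quota = round(baseline_r_quota * (1 + adjustment))
    print(adjusted_r_quota)  # 53 under these made-up figures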

They then had their telephone bank call registered Maine voters until they had reached their target number of likely voters from each party in each Senate district.

Their results for the governor's race: 29.6 percent for Paul LePage, 28.7 percent for Libby Mitchell, 11.1 percent for Eliot Cutler, 4.9 percent for Shawn Moody, 1.6 percent for Kevin Scott and 24.1 percent undecided.

Smith and Dunn could have done many things differently: They could have weighted their sample, perhaps by age or gender, or used quotas based on a different geographical division. What stands out most to me, however, is how they determined their targets based on party registration.
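Weighting, for the curious, works after the fact: instead of screening who gets into the sample, you count some respondents more heavily than others so the sample matches assumed population targets. A minimal sketch, with invented figures:

    # Hypothetical post-stratification weighting by gender. All numbers are
    # invented for illustration.
    population_share = {"men": 0.48, "women": 0.52}  # assumed electorate
    sample_counts = {"men": 230, "women": 170}       # raw interviews completed
    sample_size = sum(sample_counts.values())

    # Each group's weight is its population share divided by its sample share.
    weights = {
        group: population_share[group] / (count / sample_size)
        for group, count in sample_counts.items()
    }

    # A respondent's answer now counts weights[group] times in the tallies.
    support = {"men": 0.55, "women": 0.45}  # share backing some candidate
    weighted_support = sum(
        support[g] * sample_counts[g] * weights[g] for g in sample_counts
    ) / sample_size
    print(f"{weighted_support:.1%}")  # 49.8% here, versus 50.75% unweighted

Either way -- quotas going in or weights coming out -- someone has to decide what the electorate is supposed to look like.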

Despite his preference for "solutions based on facts," Smith offered neither a scientific reason why the two elections he chose are representative of what will happen in November nor a justification for why he divided the turnout by two.

He said he made the determination based on his "understanding of the electorate." At best, it was just an educated guess.

Pollsters make similar guesses all the time, usually hiding them behind a layer of technical jargon. The next time you see the results of a poll in the headlines, remember that behind that science, there is also some artistry.

Mike Tipping is a political junkie. He writes the Tipping Point blog about Maine politics at DownEast.com, his own blog at MainePolitics.net and works for the Maine People's Alliance and the Maine People's Resource Center. He's @miketipping on Twitter.
