Polling for All Seasons, Tastes and Political Stripes

If the blizzard of polls overwhelms you, one solution is to tune into FiveThirtyEight, which summarizes recent polls, aggregates multiple polls to see trends and covers a wide range of topics from politics to sports to culture.

Election season means leaves change color and political polls fall like rain. Keeping track of all the polls and making sense out of them is beyond the capability of most of us. Thank goodness for FiveThirtyEight. 

FiveThirtyEight, named after the number of electors in the US Electoral College, launched in 2008 as a polling aggregation site. The idea was and remains that looking collectively at polls is more useful than focusing on a single poll, which can be influenced by the skill and methodology of an individual pollster. The fivethirtyeight.com website was acquired last April by ABC News.
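The aggregation idea can be sketched in a few lines. This is a minimal illustration with hypothetical approval numbers, not FiveThirtyEight's actual model, which also weights polls by pollster quality, sample size and recency:

```python
# Hypothetical poll results for the same race; any single poll can be
# an outlier, but the average is steadier.
polls = [52.0, 48.5, 50.1, 49.4, 51.0]

average = sum(polls) / len(polls)   # the aggregate estimate
spread = max(polls) - min(polls)    # how far apart individual polls sit

print(f"average: {average:.1f}, spread across polls: {spread:.1f}")
```

The spread between the highest and lowest poll shows how much a reader could be misled by fixating on one survey instead of the average.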

In a weekly roundup of polling called Pollapalooza, the site reports on the “Poll of the Week” and provides a quick reference and links to a wide range of political polls. This week’s Pollapalooza blog centers on polling that FiveThirtyEight says shows support for President Trump flagging while support for Robert Mueller’s Russia interference investigation rises. 

The blog started with findings from a CNN poll that shows 61 percent of respondents believe the Mueller investigation is serious and should continue, up 6 points from a month ago. Poll findings indicate 72 percent of respondents believe Trump should testify under oath (+4 points since June) and 47 percent think Trump should be impeached (+5 points since June).

The latest poll by Quinnipiac, which has a slight tilt toward the right, produced complementary results. Respondents by a 55-32 margin said the Mueller investigation is fair, up 4 points from a Quinnipiac poll conducted a month ago.

FiveThirtyEight is the brainchild of Nate Silver, who brings a statistician’s eye to everything from political races to baseball sabermetrics. He has steered his informative and sometimes provocative blog through transitions that included the New York Times, ESPN and now ABC News. His statistical approach to politics and other subject areas has drawn a large following and earned him a reputation as a disruptor of status quo thinking.

Numbers were different, but the margins were similar in a YouGov poll, which indicated respondents approved of the Mueller investigation by a 49 percent to 30 percent margin. 

If you tire of reading about the Russia investigation, Pollapalooza offers a guide to other recent research. For example: 

  • 58 percent of Americans want the senior Trump official who wrote an anonymous op-ed published by the New York Times to identify himself or herself. (CNN poll)

  • A plurality of respondents say it’s “not very important” or “not important at all” for a political candidate to have strong religious beliefs. (Associated Press-NORC Center for Public Affairs Research)

  • “Two-thirds of Americans rely on social media to get at least some of their news, but more than half of those people expect the news on social media to be largely inaccurate.” (Pew Research Center)

  • “Among Americans who lost trust in media, 7 in 10 say that trust can be restored.” (Gallup and Knight Foundation)

If politics isn’t your thing, the FiveThirtyEight website serves up the latest news in sports, science & health, economics and culture.

In the culture category, the site’s blog, called Significant Digits, reported the results from a Washington Post survey of 50 cities that found police departments with lower caseloads of homicides have higher arrest rates while the opposite is true for cities with higher caseloads. “Major police departments that are successful at making arrests in homicides generally assign detectives fewer than five cases annually,” according to survey findings as reported in the newspaper under the headline, “Buried under bodies.”

The sports section is peppered with stories such as why the NFL, reputedly a passing league, doesn’t throw enough passes or a piece pitting “old-school stats” versus “fancy-pants analytics” in Major League Baseball.

FiveThirtyEight is pretty much like having your cake and your political polling, too. It is worth some clicks.

Four Different Pollsters, Four Different Results

Political polling can vary widely based on factors such as who is interviewed and the weighting pollsters give to likely voters, white voters, Hispanic voters and black voters.

Wariness of presidential political polls is warranted. The New York Times conducted an experiment that involved four different pollsters evaluating the same data set, which produced four different results.

Hillary Clinton received 42 percent support in two of the four polls and 39 percent and 40 percent in the other two. Donald Trump topped out at 41 percent in one poll, 39 percent in another and 38 percent in the other two. The Times “benchmark” poll had it 41 percent for Clinton and 40 percent for Trump.

The experiment highlights how polling, even by credible pollsters, can vary widely within the acceptable norms of polling. Critical variables include a representative sample, sampling error and basic assumptions. The latter accounted for the variance in the Times experiment that centered on the same 867 poll responses.

The most significant variables in the pollsters’ analysis of the response data were their predictions of the percentages of white, Hispanic and black likely voters in the November 8 general election.

When white voters reached 70 percent and Hispanic voters fell to 13 percent, Trump came out ahead by a percentage point.

When white voters were estimated at 68 percent and Hispanic voters at 15 percent, Clinton prevailed by 3 percentage points.

These choices weren’t random. Different pollsters relied on different models or sources of data. For example, the pollster who predicted the biggest lead for Clinton used self-reported intentions for likely voters, traditional weighting and Census data. The pollster who gave the nod to Trump relied on voter history to determine likely voters, a weighting model and voter files.
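The effect of those modeling choices can be sketched numerically. The support rates and electorate shares below are hypothetical, not the Times’ actual figures; the point is that identical responses produce different toplines under different turnout assumptions:

```python
# Hypothetical candidate support within each demographic group,
# standing in for one fixed set of poll responses.
responses = {
    "white":    {"clinton": 0.38, "trump": 0.50},
    "hispanic": {"clinton": 0.65, "trump": 0.20},
    "black":    {"clinton": 0.85, "trump": 0.05},
}

def topline(shares):
    """Weighted support for each candidate, given assumed electorate shares."""
    return {cand: sum(shares[g] * responses[g][cand] for g in shares)
            for cand in ("clinton", "trump")}

# Pollster A assumes a whiter electorate; Pollster B assumes more Hispanic turnout.
a = topline({"white": 0.70, "hispanic": 0.13, "black": 0.17})
b = topline({"white": 0.68, "hispanic": 0.15, "black": 0.17})

print("A:", a)
print("B:", b)
```

A two-point shift in the assumed racial makeup of the electorate moves the topline even though not a single response changed, which is exactly the flexibility the Times experiment exposed.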

“Their varying decisions on these questions add up to big differences in the result,” according to Nate Cohn in The Upshot report on polling. “In general, the pollsters who used vote history in the likely voter model showed a better result for Mr. Trump.”

Laid bare, the experiment shows “there really is a lot of flexibility for pollsters to make choices that generate a fundamentally different result. You can see why we say it’s best to average polls and to stop fretting so much about single polls.”

Tom Eiland is a CFM partner and the leader of the firm’s research practice. His work merges online research with client communications and engagement efforts, and he has a wide range of clients in the education, health care and transportation sectors. You can reach Tom at tome@cfmpdx.com.

Crazy Political Polling Season (Again)

Election winners aren’t always the leaders in pre-vote polls, especially in the beginning of the crazy political polling season.

Donald Trump perpetually trumpets his lead in national polls. Bernie Sanders points to his surge from obscurity to a virtual tie in Iowa. Marco Rubio tells his supporters his showing in the Hawkeye State surpassed polling predictions.

Yes, it’s that crazy political polling season again.

Polls serve a purpose, but you have to take them, certainly at this point in the presidential campaign, with a grain of salt.

Trump outpolled rival GOP contender Ted Cruz in Iowa, but the ground game Cruz put together won the day in caucus sites. Were the polls wrong or did they just miscalculate the impact of Cruz staffers going door-to-door to nail down supporters who would brave winter cold to caucus? Turnout in elections is hard for polls to predict accurately.

Last-minute candidate surges can trick polls. They can be overstated or understated. Or missed, like Rubio’s in Iowa. Even weekly polls can be too slow to track fast-moving voter impressions.

How well candidates fare with key cohorts of voters can be missed, too. Hillary Clinton’s “upset” victory over Barack Obama in the 2008 New Hampshire primary was traced to polling samples that under-represented lower income voters who didn’t have or take the time to respond to telephone polls. The same problem can occur now if pollsters don’t include respondents only reachable on cell phones.

National polls can obscure state-level electoral leanings. Bernie Sanders may thrive in New Hampshire, which has a very liberal, white Democratic base and is next to his home state of Vermont. Hillary Clinton may have a clear advantage in South Carolina where African Americans dominate the Democratic base. Even though Cruz trailed Trump in national polls, he concentrated his efforts in Iowa on Christian evangelical voters who have a history of determining who wins the GOP vote there.

Polling techniques can have subtle influences on outcomes, which is why different polls taken at the same time with equivalent samples and sample sizes produce varying results. One of the factors in polling discrepancies is “tactical voting” or undecided voters declaring a preference they really don’t mean. When you have a lot of candidates, this factor grows in significance.

Then there is the confusion between polls and probabilities. Nate Silver of FiveThirtyEight earned a reputation – and skeptics – for basing candidate predictions on a different statistical analysis, not on the candidate's poll numbers. In a tweet following the Iowa caucuses Monday night, Silver said, “Polls in general elections = pretty good. Polls in primaries = much less accurate. Iowa caucus = especially tough.”

In a blog before the caucus, Silver said poll numbers don’t lie; they just don’t tell you the truth. “Could Marco Rubio win the Iowa caucuses despite not having led in a single poll here?” Silver wrote. "Sure. Rick Santorum did that exact thing four years ago.”

So if you are influenced by poll numbers in the early going of the presidential race, you might want to reconsider. The political polling crazy season is just beginning (again).

Political Polling Validity Becomes Shaky

Political polling is getting less reliable in predicting actual election outcomes. Reasons include the growing use of cell phones, reluctance to participate in telephone surveys and the rising cost of representative research samples.

Political polling doesn't seem to be as spot on as it used to be. Greater use of cell phones, wariness to participate in surveys and unrepresentative samples are among the reasons that political polls and election results turn out differently.

Cliff Zukin, a Rutgers political science professor and past president of the American Association for Public Opinion Research, writes in the New York Times that "polls and pollsters are going to be less reliable," so voters and the news media should beware.

"We are less sure how to conduct good survey research now than we were four years ago, and much less than eight years ago," says Zukin. "Don't look for too much help in what the polling aggregation sites may be offering. They, too, have been falling further off the track of late. It's not their fault. They are only as good as the raw material they have to work with."

Polling failures were exposed in the largely undetected 2014 midterm election sweep in which Republicans captured both houses of Congress, in Prime Minister Benjamin Netanyahu's solid victory in Israel and in British Prime Minister David Cameron's relatively easy re-election win.

Cell phones are everywhere and increasingly have replaced landline telephones. Pollsters can find cell phone numbers, but federal law prevents calling them with automatic dialers. According to Zukin, "To complete a 1,000-person survey, it's not unusual to have to dial more than 20,000 random numbers, most of which do not go to working telephone numbers." That adds budget-busting cost to telephone surveys, which in turn lead to "compromises in sampling and interviewing."

Response rates to surveys have declined precipitously. In the 1970s, Zukin says, an 80 percent response rate was considered acceptable. Now response rates have dipped below 10 percent. It is hard to draw a representative sample when large chunks of the population refuse to participate. Some cohorts, such as lower income household members, are less likely to participate than others, which can skew results. And it takes more calls to achieve a representative sample, which encourages corner-cutting.

Internet polling has emerged as a strong alternative. It is cheaper than telephone surveys and, at least for the moment, people seem more willing to participate, in part because they have more choice in when and how to respond.

But Internet use has built-in biases, too, Zukin notes. While 97 percent of people between the ages of 18 and 29 use the Internet, 40 percent of adults older than age 65 don't. "Almost all online election polling is done with non-probability samples," Zukin says, which makes it impossible to calculate a margin of error. 
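To see what is lost, here is the standard margin-of-error calculation that a probability sample supports. This is a minimal sketch of the textbook formula for a simple random sample, not any specific pollster's methodology:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person probability sample at 50 percent support carries
# roughly a plus-or-minus 3 point margin.
moe = margin_of_error(0.5, 1000)
print(f"margin of error: +/- {moe * 100:.1f} points")
```

The formula depends on every member of the population having a known chance of selection; with an opt-in online panel that assumption fails, so no comparable number can honestly be attached to the result.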

The most vexing polling problem is not a new one – determining who will actually vote. Public opinion polling is one thing; trying to predict the outcome of an actual election is another. Pollsters recognize that respondents will overstate their likelihood of actually voting, but have limited ability to identify who will and who won't cast ballots.

Not voting can occur for a mix of reasons – bad weather, lack of interest or political protest. Some registered voters simply forget to vote, especially in non-presidential elections. Less motivated voters vote in top-line races and leave the rest of their ballots blank, making it hard to predict the "turnout" for so-called down-ballot candidates and ballot measures.

Scott Keeter, who directs survey research at Pew Research, says the combination of these factors is shifting political polling "from science to art."

Political polls will continue to be magnets for media coverage, but readers should be aware that the results may not have as much validity as polling in the past.