Four Different Pollsters, Four Different Results

Political polling can vary widely based on factors such as who is interviewed and the weighting pollsters give to likely voters, white voters, Hispanic voters and black voters.

Wariness of presidential political polls is warranted. The New York Times conducted an experiment in which four different pollsters analyzed the same data set and produced four different results.

Hillary Clinton received 42 percent support in two of the four polls and 39 percent and 40 percent in the other two. Donald Trump topped out at 41 percent in one poll, 39 percent in another and 38 percent in the other two. The Times “benchmark” poll had it 41 percent for Clinton and 40 percent for Trump.

The experiment highlights how results, even from credible pollsters, can vary widely within accepted polling norms. Critical variables include the representativeness of the sample, sampling error and the pollsters’ underlying assumptions. Because the experiment held the sample constant at the same 867 responses, those assumptions alone accounted for the variance.
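As a rough illustration of sampling error alone (separate from the modeling assumptions the experiment isolates), the conventional 95 percent margin-of-error calculation for a sample of 867 looks like the sketch below. The ±3-point figure it produces is a textbook approximation at the worst-case 50/50 split, not a number reported in the Times experiment.

```python
import math

# Conventional 95% margin of error for a simple random sample, evaluated at
# p = 0.5 (the worst case). n = 867 matches the number of responses in the
# Times experiment; the formula is a textbook approximation, nothing more.
n = 867
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"+/- {100 * moe:.1f} percentage points")  # roughly +/- 3.3 points
```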

The most significant variable in the pollsters’ analysis of the response data was their estimate of the share of white, Hispanic and black likely voters in the November 8 general election.

When white voters were estimated at 70 percent of the likely electorate and Hispanic voters at 13 percent, Trump came out ahead by a percentage point.

When white voters were estimated at 68 percent and Hispanic voters at 15 percent, Clinton prevailed by 3 percentage points.
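The mechanism behind those swings is essentially post-stratification weighting: the same responses are rescaled so each demographic group matches its assumed share of the likely electorate. The sketch below illustrates the idea with invented response counts, not the Times’ 867 responses; only the 70/13 and 68/15 composition assumptions come from the experiment, and the 12 percent black share is a placeholder.

```python
from collections import defaultdict

# Hypothetical response counts by group -- invented for illustration only.
raw = {
    "white":    {"Clinton": 210, "Trump": 290},
    "hispanic": {"Clinton": 90,  "Trump": 40},
    "black":    {"Clinton": 110, "Trump": 10},
}

def weighted_topline(counts, electorate_shares):
    """Rescale each group's responses to its assumed share of likely voters,
    then return each candidate's weighted share of the total."""
    totals = defaultdict(float)
    for group, candidates in counts.items():
        group_n = sum(candidates.values())
        weight = electorate_shares[group] / group_n  # weight per respondent
        for candidate, n in candidates.items():
            totals[candidate] += n * weight
    overall = sum(totals.values())
    return {c: round(100 * v / overall, 1) for c, v in totals.items()}

# Two turnout assumptions from the experiment: 70% white / 13% Hispanic
# versus 68% white / 15% Hispanic.
for shares in ({"white": 0.70, "hispanic": 0.13, "black": 0.12},
               {"white": 0.68, "hispanic": 0.15, "black": 0.12}):
    print(shares, weighted_topline(raw, shares))
```

Even with made-up counts, shifting two percentage points of assumed turnout from white to Hispanic voters moves the weighted margin by about a point, which is the same mechanism that changed the leader in the Times experiment.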

These choices weren’t random. Different pollsters relied on different models and sources of data. For example, the pollster who predicted the biggest lead for Clinton used self-reported intentions to identify likely voters, traditional weighting and Census data. The pollster who gave the nod to Trump used vote history to identify likely voters, a statistical weighting model and voter files.
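To make the likely-voter distinction concrete, here is a minimal sketch of two screens applied to the same hypothetical respondents: one based on self-reported intention, one based on vote history from a voter file. The respondents and thresholds are invented and do not reflect any pollster’s actual model; the point is only that different screens retain different subsets of the same data.

```python
# Hypothetical respondents: stated intention, past general elections voted in,
# and candidate preference. All values are invented for illustration.
respondents = [
    {"self_report": "definitely", "past_votes": 3, "choice": "Clinton"},
    {"self_report": "definitely", "past_votes": 0, "choice": "Clinton"},
    {"self_report": "probably",   "past_votes": 3, "choice": "Trump"},
    {"self_report": "definitely", "past_votes": 2, "choice": "Trump"},
]

# Screen 1: keep anyone who says they will "definitely" vote.
self_report_screen = [r for r in respondents if r["self_report"] == "definitely"]

# Screen 2: keep anyone who voted in at least two of the last three elections.
vote_history_screen = [r for r in respondents if r["past_votes"] >= 2]

# The two screens keep different subsets of the same data, so the toplines differ.
print([r["choice"] for r in self_report_screen])   # Clinton, Clinton, Trump
print([r["choice"] for r in vote_history_screen])  # Clinton, Trump, Trump
```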

“Their varying decisions on these questions add up to big differences in the result,” according to Nate Cohn in The Upshot report on the experiment. “In general, the pollsters who used vote history in the likely voter model showed a better result for Mr. Trump.”

Laid bare, the experiment shows “there really is a lot of flexibility for pollsters to make choices that generate a fundamentally different result. You can see why we say it’s best to average polls and to stop fretting so much about single polls.”

Tom Eiland is a CFM partner and the leader of the firm’s research practice. His work merges online research with client communications and engagement efforts, and he has a wide range of clients in the education, health care and transportation sectors. You can reach Tom at tome@cfmpdx.com.