Four Different Pollsters, Four Different Results

Political polling can vary widely based on factors such as who is interviewed and the weighting pollsters give to likely voters, white voters, Hispanic voters and black voters.

Wariness of presidential political polls is warranted. The New York Times conducted an experiment that involved four different pollsters evaluating the same data set, which produced four different results.

Hillary Clinton received 42 percent support in two of the four polls and 39 percent and 40 percent in the other two. Donald Trump topped out at 41 percent in one poll, 39 percent in another and 38 percent in the other two. The Times “benchmark” poll had it 41 percent for Clinton and 40 percent for Trump.

The experiment highlights how polling, even by credible pollsters, can vary widely within the acceptable norms of polling. Critical variables include a representative sample, sampling error and basic assumptions. The latter accounted for the variance in the Times experiment that centered on the same 867 poll responses.

The most significant variables in the pollsters' analyses of the response data were their estimates of the share of white, Hispanic and black likely voters in the November 8 general election.

When white voters reached 70 percent and Hispanic voters fell to 13 percent, Trump came out ahead by a percentage point.

When white voters were estimated at 68 percent and Hispanic voters at 15 percent, Clinton prevailed by 3 percentage points.

These choices weren’t random. Different pollsters relied on different models or sources of data. For example, the pollster who predicted the biggest lead for Clinton used self-reported intentions for likely voters, traditional weighting and Census data. The pollster who gave the nod to Trump relied on voter history to determine likely voters, a weighting model and voter files.
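The mechanics behind those differing results can be sketched in a few lines. The sketch below uses entirely synthetic response data (the respondent counts and candidate splits are invented for illustration, not drawn from the Times data set) to show how the same raw responses yield different toplines once each demographic group is re-weighted to a pollster's assumed electorate:

```python
# Illustrative sketch of demographic weighting: the same raw responses
# produce different toplines under different turnout assumptions.
# All respondent data below is synthetic, invented for illustration only.
from collections import Counter

def weighted_topline(responses, turnout_shares):
    """Weight each respondent so group shares match the assumed electorate,
    then return each candidate's weighted percentage."""
    counts = Counter(r["group"] for r in responses)
    n = len(responses)
    totals = {"Clinton": 0.0, "Trump": 0.0}
    for r in responses:
        sample_share = counts[r["group"]] / n
        weight = turnout_shares[r["group"]] / sample_share
        totals[r["candidate"]] += weight
    total_weight = sum(totals.values())
    return {c: 100 * v / total_weight for c, v in totals.items()}

# Synthetic sample: white voters lean Trump, Hispanic and black voters lean Clinton.
responses = (
    [{"group": "white", "candidate": "Trump"}] * 55
    + [{"group": "white", "candidate": "Clinton"}] * 45
    + [{"group": "hispanic", "candidate": "Clinton"}] * 70
    + [{"group": "hispanic", "candidate": "Trump"}] * 30
    + [{"group": "black", "candidate": "Clinton"}] * 85
    + [{"group": "black", "candidate": "Trump"}] * 15
)

# Two turnout models echoing the scenarios described in the article.
model_a = {"white": 0.70, "hispanic": 0.13, "black": 0.17}
model_b = {"white": 0.68, "hispanic": 0.15, "black": 0.17}

print(weighted_topline(responses, model_a))
print(weighted_topline(responses, model_b))
```

Shifting just two percentage points of assumed turnout from white to Hispanic voters moves the topline toward Clinton, without a single response changing, which is the flexibility the experiment exposed.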

“Their varying decisions on these questions add up to big differences in the result,” according to Nate Cohn in The Upshot report on polling. “In general, the pollsters who used vote history in the likely voter model showed a better result for Mr. Trump.”

Laid bare, the experiment shows “there really is a lot of flexibility for pollsters to make choices that generate a fundamentally different result. You can see why we say it’s best to average polls and to stop fretting so much about single polls.”

Tom Eiland is a CFM partner and the leader of the firm’s research practice. His work merges online research with client communications and engagement efforts, and he has a wide range of clients in the education, health care and transportation sectors. You can reach Tom at tome@cfmpdx.com.

The Right Tool for the Job

We live in the digital era, but that doesn't mean social media platforms such as Twitter can substitute for reliable public opinion instruments.

What's trending on Twitter isn't always an accurate reflection of public opinion. A large number of tweets may indicate public interest in a topic or event, but not a full picture of what the public thinks.

This isn't surprising. Twitter is a self-selected social media tool. The body of tweets doesn't need to reflect the demographics of a community, state or constituency. People who tweet on a topic may be more liberal, more conservative, richer or poorer than the public at large. Comments have value, but they can't be quantified the way public opinion polling can.

Quality public opinion polling is centered on a representative sample of who is interviewed. That assures the findings have credibility as a reliable reflection of the group being surveyed, within a stated margin of error.

The breadth and depth of the digital revolution may tempt some to see social media platforms as mirrors of public opinion. They certainly are reflections, but not ones you can totally rely upon to make decisions on messaging, trustworthy spokespeople and effective communication channels. A solid poll is a much better instrument for that.

Twitter conversations can be valuable to assess. For example, tweets can show the emotional charge in an issue or how an issue activates a particular group. The compressed format helps people distill what they feel to a few words, which in effect become sound bites. Tweets also can show the range of reactions.

In the world of measurement, there is room for evaluation of platforms such as Twitter. But it is important to recognize the right tool for the job. When you need an accurate picture of how a constituency views an issue, a poll with a representative sample is a much better choice.

Market Research and Social Media Insights

A debate is raging over whether scanning social media for insights is a substitute for disciplined market research. In our view, the answer is no. However, that doesn't mean social media analysis is without value.

Market research, whether for products, issues or elections, rests on testing a representative sample of a target audience. Social media analysis adds value by giving insight into outliers in a target audience, the people who don't fit neatly in a box.

The spectrum of social media users doesn't always equate to a representative sample, so any analysis will have a lower level of confidence than traditional market research techniques. But it provides insight into the "why" of some viewpoints, biases or preferences. Social media analysis serves, therefore, as a type of qualitative research.

Working in tandem, market research and social media insight-gathering can give a richer picture of a target audience. Surveys, for example, offer a perspective frozen at the time the survey was conducted. There are also practical limits to reaching the ideal representative sample. People who only use cell phones are easily under-represented in telephone surveys.

An underlying trend behind this debate is the growing diversity in our population, which renders the concept of "representative sample" less meaningful. There may be fewer Mr. and Mrs. Average people. And even if you identified Mr. and Mrs. Average, many products and ideas appeal to Mr. and Ms. Not-Average.