
Public Opinion Polls Stay Predictable in 2017 Election

Public opinion polling earned a black eye in the 2016 election cycle when most polls failed to predict a Donald Trump presidential victory. Few changes in polling techniques have been implemented since, yet in a handful of 2017 statewide elections poll accuracy seems reconfirmed, at least for now. The X-factor of Trump wasn’t on the ballot.


Public opinion pollsters got a shiner in the 2016 election with off-base predictions about presidential and congressional elections. That may have signaled the need for major changes in technique, but that hasn’t happened, according to a story in The New York Times.

However, one unassuming change might right the ship. Pollsters are literally giving more weight in surveys to the level of education of respondents. Weighting respondents by education is far from easy. Voter preferences don’t align neatly with educational attainment. In 2016, because of the profile of the presidential candidates, educational levels mattered. That may not be so in future elections.
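The mechanics of weighting by education can be sketched in a few lines. In this toy example, the target population shares and the raw responses are entirely hypothetical, chosen only to show how a sample that over-represents college graduates gets rebalanced:

```python
from collections import Counter

# Hypothetical target shares of the electorate by education (assumed, for illustration)
population_share = {"no_college": 0.60, "college_grad": 0.40}

# Hypothetical raw responses: college grads over-represented relative to the target
sample = (
    [{"education": "college_grad", "candidate": "A"}] * 55
    + [{"education": "college_grad", "candidate": "B"}] * 15
    + [{"education": "no_college", "candidate": "A"}] * 10
    + [{"education": "no_college", "candidate": "B"}] * 20
)

n = len(sample)
sample_share = Counter(r["education"] for r in sample)

# Weight each respondent so the sample's education mix matches the target mix
weights = {edu: population_share[edu] / (count / n) for edu, count in sample_share.items()}

def support(candidate):
    """Return (raw, weighted) share of the sample backing a candidate."""
    raw = sum(1 for r in sample if r["candidate"] == candidate) / n
    weighted = sum(weights[r["education"]] for r in sample if r["candidate"] == candidate) / n
    return raw, weighted

print(support("A"))  # weighting shrinks A's lead, since A ran strongest with college grads
```

The catch the article points to is the first line: the "correct" education mix of the electorate is itself an estimate, and getting it wrong moves the topline.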

For pollsters who think big methodological changes are unnecessary, Virginia may prove them right. Hillary Clinton polled five or six points ahead of Donald Trump in Virginia in the 2016 election and eventually carried the state by 5.3 percentage points. Polling in the 2017 Virginia gubernatorial election held on Tuesday showed Democrat Ralph Northam leading his GOP counterpart Ed Gillespie by as little as 3 percentage points. With more than 80 percent of votes tallied, Northam posted nearly a 7-point lead.

Political polling is not a perfected science. Conscientious pollsters continuously look for factors that can skew results, such as the sea change from landline phones to cell phones, and adjust to account for it. A sample that excludes cell phones under-represents young voters, minorities and people who work more than one job.

Trump’s largely unexpected victory in 2016 confounded many pollsters and led to serious questioning of polling techniques. Did pollsters conduct late surveys to capture voters who decided at the last minute? How did pollsters compensate for respondents who intended to vote for Trump, but didn’t want to say so publicly? Did surveys fully take into account more remote areas, which went strongly in Trump’s direction? And how do you accurately predict turnout, not just overall, but by key constituencies that can determine whether one candidate wins or loses?

Challenges to getting accurate polling results may be intensifying as the electorate becomes more polarized, a factor that is hard to measure. And while education may be an obvious factor to include, figuring out how, and whether, it reliably predicts voting behavior isn’t so obvious.

Politicians and news media put more stock in public opinion polling than voters do. They are the ones who pay for it and, in varying degrees, expect polling results to reflect reality. Voters have no such expectations or fealty to polling results. If anything, polling results can incite small groups of voters to go to the polls or stay home, to vote one way or the other.

When all is said and done, polls don’t matter. Elections matter. Hillary Clinton led in the polls, but lost the election. Donald Trump sleeps in the White House. Clinton sleeps in hotels on her book tour explaining how she lost an election she thought she would win.

History may show 2016 was an aberration in otherwise reliable polling. Pre-election polls proved out in Tuesday’s gubernatorial elections in New Jersey and Virginia. No curveballs, even though Gillespie in Virginia did his best to imitate the political bombast of Trump.

While the gubernatorial election outcome may give pause to Republicans standing for re-election in 2018, the predictability of public opinion polls in this cycle may reassure the buyers of political polling to keep investing.

Four Different Pollsters, Four Different Results

Political polling can vary widely based on factors such as who is interviewed and the weighting pollsters give to likely voters, white voters, Hispanic voters and black voters.


Wariness of presidential political polls is warranted. The New York Times conducted an experiment that involved four different pollsters evaluating the same data set, which produced four different results.

Hillary Clinton received 42 percent support in two of the four polls and 39 percent and 40 percent in the other two. Donald Trump topped out at 41 percent in one poll, 39 percent in another and 38 percent in the other two. The Times “benchmark” poll had it 41 percent for Clinton and 40 percent for Trump.

The experiment highlights how polling, even by credible pollsters, can vary widely within the acceptable norms of polling. Critical variables include a representative sample, sampling error and basic assumptions. The latter accounted for the variance in the Times experiment that centered on the same 867 poll responses.
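One of those variables, sampling error, can be put in concrete terms. As a rough illustration, assuming a simple random sample (a simplification that understates real-world error from weighting and nonresponse), the standard 95 percent margin of error for the experiment’s 867 responses works out to about plus or minus 3.3 points:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    assuming a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The Times experiment centered on the same 867 poll responses
moe = margin_of_error(867)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.3 points, before any weighting choices
```

That pure-chance wobble sits underneath, and compounds with, the differences in pollster assumptions discussed below.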

The most significant variable in the pollsters’ analysis of the response data was predicting the percentage of white, Hispanic and black likely voters in the November 8 general election.

When white voters reached 70 percent and Hispanic voters fell to 13 percent, Trump came out ahead by a percentage point.

When white voters were estimated at 68 percent and Hispanic voters at 15 percent, Clinton prevailed by 3 percentage points.

These choices weren’t random. Different pollsters relied on different models or sources of data. For example, the pollster who predicted the biggest lead for Clinton used self-reported intentions for likely voters, traditional weighting and Census data. The pollster who gave the nod to Trump relied on voter history to determine likely voters, a weighting model and voter files.

“Their varying decisions on these questions add up to big differences in the result,” according to Nate Cohn in The Upshot report on polling. “In general, the pollsters who used vote history in the likely voter model showed a better result for Mr. Trump.”

Laid bare, the experiment shows “there really is a lot of flexibility for pollsters to make choices that generate a fundamentally different result. You can see why we say it’s best to average polls and to stop fretting so much about single polls.”
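The flexibility Cohn describes can be sketched directly. In the toy model below, the group-level support rates are hypothetical, chosen only to show the mechanism, not to reproduce the Times numbers; only the 70/13 and 68/15 white/Hispanic turnout shares come from the experiment described above:

```python
# Two turnout models applied to the same assumed group-level support rates.
# Support rates are hypothetical; turnout shares echo the Times experiment.
support = {
    "white":    {"clinton": 0.37, "trump": 0.54},
    "hispanic": {"clinton": 0.70, "trump": 0.20},
    "black":    {"clinton": 0.85, "trump": 0.05},
}

scenarios = {
    "model_A": {"white": 0.70, "hispanic": 0.13, "black": 0.17},  # more white turnout
    "model_B": {"white": 0.68, "hispanic": 0.15, "black": 0.17},  # more Hispanic turnout
}

def topline(mix):
    """Weight each group's support by its assumed share of likely voters."""
    clinton = sum(mix[g] * support[g]["clinton"] for g in mix)
    trump = sum(mix[g] * support[g]["trump"] for g in mix)
    return clinton, trump

for name, mix in scenarios.items():
    c, t = topline(mix)
    print(f"{name}: Clinton {c:.1%}, Trump {t:.1%}, margin {c - t:+.1%}")
```

Even in this crude sketch, shifting two points of assumed turnout from white to Hispanic voters moves the topline margin by more than a point, which is exactly why averaging across pollsters’ differing assumptions beats fixating on any single poll.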

Tom Eiland is a CFM partner and the leader of the firm’s research practice. His work merges online research with client communications and engagement efforts, and he has a wide range of clients in the education, health care and transportation sectors. You can reach Tom at tome@cfmpdx.com.