New Book Says Polls Provide Indications, Not Predictions

Anthony Salvanto with CBS News has written a new book that explains what polling can and can’t do. It’s a good place to begin to become a savvy research consumer.

Political polls give indications of voter attitudes, not predictions of election outcomes, says Anthony Salvanto in his new book, Where Did You Get This Number?

Salvanto, the director of elections and surveys for CBS News, says he wrote his book to explain how polling works after skepticism arose following the 2016 presidential election that polls suggested was a lock for Hillary Clinton. She did win the popular vote, but lost in states critical to a victory in the Electoral College. The polls were right and wrong at the same time.

In an interview on Face the Nation, Salvanto said he is often asked how national poll numbers can be generated from as few as 1,000 ten-minute telephone interviews. He explains that representative samples can produce reliable results. Pollsters may not interview you, but they interview people like you.

A representative sample is just part of the best practices followed by professional pollsters. Clear, objective questions must be asked. Individual questions should test a single variable. Conclusions should be tempered by statistical validity. For example, a national poll with a 1,000-respondent sample may provide a valid national picture, but not a statistically valid picture of voters in Colorado.
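The statistical point can be illustrated with the standard margin-of-error formula for a simple random sample. The numbers below are for illustration only; the Colorado subsample size is an assumption based on the state's rough share of the U.S. population.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

national = margin_of_error(1000)  # full 1,000-respondent national sample
# Colorado is roughly 1.7% of the U.S. population, so a national sample
# might include only about 17 Coloradans.
colorado = margin_of_error(17)

print(f"National sample:     +/-{national:.1%}")   # about +/-3.1%
print(f"Colorado subsample:  +/-{colorado:.1%}")   # about +/-23.8%
```

The national figure is the familiar "plus or minus 3 points," while the state subsample is too noisy to say anything useful, which is why a national poll can't stand in for a Colorado poll.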

Even the most scrupulous professional pollsters don't always get the numbers exactly right. There often is a slight but significant skew resulting from the specific methodology a pollster uses. For example, failure to include a representative number of random sample calls to cell phone users could under-represent younger people, low-income families and minorities.

Nate Silver of FiveThirtyEight.com argues it is more reliable to look at groups of polls through the lens of a probability model. He claims analyzing a pool of polls and weighting each one by its history of accuracy can burp out more accurate polling results. Even then, Salvanto would say, it is not a prediction, just a reflection in time.
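The core of that idea, a poll average weighted by pollster track record, can be sketched in a few lines. The poll shares and accuracy weights below are invented for illustration; FiveThirtyEight's actual model is considerably more elaborate, also adjusting for factors such as sample size and recency.

```python
# Each entry: (candidate's share in the poll, pollster accuracy weight).
# Higher weight = better historical track record (illustrative numbers).
polls = [
    (0.48, 1.0),   # pollster with an average track record
    (0.52, 1.5),   # pollster with a strong track record
    (0.45, 0.5),   # pollster with a weak track record
]

weighted_total = sum(share * weight for share, weight in polls)
total_weight = sum(weight for _, weight in polls)
weighted_average = weighted_total / total_weight

print(f"Weighted polling average: {weighted_average:.1%}")  # 49.5%
```

Note how the strong pollster's 52 percent pulls the average up, while the weak pollster's 45 percent barely moves it.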

Then there are the polls that aren’t really polls. Push-polls ask questions, less to get an answer and more to deliver a message, often a negative one, about a political opponent. Cheap robopolls get lower than average response rates, which can skew results. Because they are prohibited by law from calling cell phone users randomly, they have a built-in bias.

The bottom line: Purchasers need to be smart consumers of research. Before looking at results, look at the sample so you know whose views are represented in the results. Understand the methodology being used and the statistical confidence it will yield. Know the benefits and limitations of different types of research, particularly the difference between qualitative and quantitative research. Collaborate with a pollster on the questions that need to be asked and let him or her advise you on how to ask them fairly so you get usable responses, not just what you want to hear.

Salvanto’s book may be the place to start on your journey to understanding polling’s potential and limitations.

 

Crazy Political Polling Season (Again)

Election winners aren’t always the leaders in pre-vote polls, especially in the beginning of the crazy political polling season.

Donald Trump perpetually trumpets his lead in national polls. Bernie Sanders points to his surge from obscurity to a virtual tie in Iowa. Marco Rubio tells his supporters his showing in the Hawkeye State surpassed polling predictions.

Yes, it’s that crazy political polling season again.

Polls serve a purpose, but you have to take them, certainly at this point in the presidential campaign, with a grain of salt.

Trump outpolled rival GOP contender Ted Cruz in Iowa, but the ground game Cruz put together won the day in caucus sites. Were the polls wrong or did they just miscalculate the impact of Cruz staffers going door-to-door to nail down supporters who would brave winter cold to caucus? Turnout in elections is hard for polls to predict accurately.

Last-minute candidate surges can trick polls. They can be overstated or understated. Or missed, like Rubio’s in Iowa. Even weekly polls can be too slow to track fast-moving voter impressions.

How well candidates fare with key cohorts of voters can be missed, too. Hillary Clinton’s “upset” victory over Barack Obama in the 2008 New Hampshire primary was traced to polling samples that under-represented lower income voters who didn’t have or take the time to respond to telephone polls. The same problem can occur now if pollsters don’t include respondents only reachable on cell phones.
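When a sample under-represents a group this way, pollsters typically reweight responses to match the group's known share of the population. A minimal sketch of that adjustment, with invented figures, shows how much the correction can matter.

```python
# Post-stratification sketch with invented numbers: the sample
# under-represents cell-phone-only respondents, so responses are
# reweighted to match assumed population shares.
population_share = {"landline": 0.55, "cell_only": 0.45}  # assumed shares
sample_share     = {"landline": 0.75, "cell_only": 0.25}  # who the poll reached
support          = {"landline": 0.40, "cell_only": 0.60}  # candidate support by group

raw = sum(sample_share[g] * support[g] for g in support)
weighted = sum(population_share[g] * support[g] for g in support)

print(f"Unweighted support: {raw:.1%}")       # 45.0%, skewed by the sample mix
print(f"Weighted support:   {weighted:.1%}")  # 49.0%
```

A four-point swing from reweighting alone is larger than the margin of error in many polls, which is why a sample that misses cell-phone-only voters can call a close race wrong.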

National polls can obscure state-level electoral leanings. Bernie Sanders may thrive in New Hampshire, which has a very liberal, white Democratic base and is next to his home state of Vermont. Hillary Clinton may have a clear advantage in South Carolina where African Americans dominate the Democratic base. Even though Cruz trailed Trump in national polls, he concentrated his efforts in Iowa on Christian evangelical voters who have a history of determining who wins the GOP vote there.

Polling techniques can have subtle influences on outcomes, which is why different polls taken at the same time with equivalent samples and sample sizes produce varying results. One factor in polling discrepancies is "tactical voting," when undecided voters declare a preference they don't really mean. With a large field of candidates, this factor grows in significance.

Then there is the confusion between polls and probabilities. Nate Silver of FiveThirtyEight earned a reputation – and skeptics – for basing candidate predictions on a different statistical analysis, not on the candidate's poll numbers. In a tweet following the Iowa caucuses Monday night, Silver said, “Polls in general elections = pretty good. Polls in primaries = much less accurate. Iowa caucus = especially tough.”

In a blog before the caucus, Silver said poll numbers don’t lie; they just don’t tell you the truth. “Could Marco Rubio win the Iowa caucuses despite not having led in a single poll here?” Silver wrote. “Sure. Rick Santorum did that exact thing four years ago.”

So if you are influenced by poll numbers in the early going of the presidential race, you might want to reconsider. The political polling crazy season is just beginning (again).

Polling Versus Probabilities

Donald Trump and Ben Carson still lead the national polls, but the real question is what their true probabilities are of ever being elected president.

Republican presidential candidates, especially those left off the main stage podium for debates, have groused about the use of national polls to determine who is on and who is off.

National polls for some time now have placed Donald Trump and Ben Carson at the top of the GOP heap, positioning them center stage in debates and as prime candidates for earned media exposure. But do national polls a year out from the actual election really tell much of a story?

Nate Silver, the founder and editor of popular number-crunching blog site FiveThirtyEight, suggests that looking at probabilities would be more useful at this point. In an online chat on his site, Silver said there is basically a 50-50 chance for either the Democratic or Republican party to win an election following a two-term presidency. The key to projecting ahead, he says, is to evaluate what could sway the probabilities one way or another.

One influence identified in the online chat is the favorability rating of President Obama. His rating has trended up to around 46 percent, which Silver says may mean he won't have much tipping capacity either way.

The most significant factor, Silver says, is whether Republicans nominate a conservative (he names Ted Cruz, Ben Carson or Donald Trump). If one of them is the standard-bearer, Silver thinks Hillary Clinton, assuming she wins the Democratic presidential nomination, would have the "clear edge." That edge could widen if Republicans nominate a more centrist candidate and a conservative mounts an independent challenge.

Handicapping the electoral potential of candidates is something political polls don't do. They provide snapshots over time of how well a candidate is doing relative to his or her competitors.

The candidates, or at least most of them, recognize this. Rick Santorum, who is mired at near zero in national polls even though he won the Iowa caucus vote in 2012, told "Face the Nation" recently that polling fails to take into account residual support and the effectiveness of a ground game campaign strategy. Santorum also said national polls are meaningless and all that matters is how Iowans cast their votes on a cold winter night in January.

State-specific polls generally have reflected the dominance of Trump and Carson, but they aren't so firm that they couldn't change suddenly depending on who wins or does well in Iowa and the follow-on primary in New Hampshire.

The Iowa and New Hampshire votes are expected to accelerate the dropout rate in the still large GOP field of candidates. For example, if Mike Huckabee, who won the Iowa caucus vote in 2008, dropped out, who is best positioned to pick up his supporters in other states? Ditto for the departure of a major candidate like Jeb Bush.

A smaller cast of candidates would focus the choice Republican voters have to make, which could dramatically alter poll results.

Which brings us back to Silver and his reliance on probabilities, not polling. If you were going to the racetrack to pick a winner, you would agree with Silver and look at a horse's potential, not its popularity.

The Pseudo-Science of Bracketology

Bracketology, the process of picking the winners in the NCAA men's and women's national basketball tournaments, has attracted a lot of scientific attention — and a lot of dubious science.

University of Maryland quantum computing students have created a sophisticated, hard-to-fathom bracketology system that boils down to using a ytterbium ion like a coin flip to pick winners. Its ion coin flips predict the University of Pittsburgh, the eighth seed in the Eastern regional, to win the Big Dance. Unfortunately for the Panthers, they lost their first tournament game to ninth-seeded Wichita State.

Nate Silver, the legendary numbers cruncher in the world of politics, predicts Louisville has the highest probability of any of the 68 teams in the tournament to win at 22.7 percent. Indiana is next at 19.6 percent, followed by Florida at 12.7 percent, Kansas 7.5 percent and Number 1-ranked Gonzaga at 6.1 percent.

Silver has street cred because last year he predicted Kentucky would win the Big Dance — along with just about everybody else who follows college basketball and noticed the starting five were likely to be top picks in the NBA draft.

Because March Madness is a major national distraction that saps productivity from America's offices and factories, marketers sniff an opportunity. A number of brands have created their own bracketology to engage consumers. One investment analyst compared picking stocks to picking NCAA tournament winners, which may not have been the best of ideas.

The Final Poll, Before More Polls

Polling in this presidential contest has shown Barack Obama and Mitt Romney tied up more often than Houdini. 

Tuesday will bring the only poll that counts, and for many citizens not a moment too soon.

The zigzagging polls may have reflected the ups and downs of the Obama and Romney candidacies this fall. They also may have been hopeful interpretations of a margin of error or varying calculations of likely voters.

Whatever the cause, in its final pre-election tracking poll, The Washington Post and ABC reported a slight 50-47 Obama lead over Romney, which could be the combined product of Romney's campaign peaking too soon and Hurricane Sandy thrusting President Obama into a national leadership role.

Nate Silver, who writes a blog for The New York Times about polls, suggested Obama heads into Tuesday with a "very modest lead."  But Silver noted that of the 12 national polls published over the weekend, three called the race dead even.

As the polls on the popular vote tightened in the last few weeks, attention turned with a vengeance to speculation over the Electoral College, which many today view as an 18th-century relic and a 21st-century calamity waiting to happen.

Pundits wondered endlessly about whether one candidate might win the popular vote while the other claimed the necessary 270 electoral votes and the presidency — a vagary that has occurred before in American history. More angst was spilled on the prospect — which colored interactive maps illustrated — of the candidates winding up with an Electoral College tie, throwing the election to the GOP-controlled House.

What may be more useful to explore after the election is settled — Tuesday night, Wednesday or whenever — is how Americans wound up voting. We already have a clear picture of the stark divide between red and blue states, but what other schisms will the election bear out?

Pre-election polling indicates Obama enjoys stronger support from women than Romney, while the reverse is true for male voters. Obama does better with minorities; Romney does best with whites.

Online Research Works If Done Right

When conducted correctly, online research can accurately collect opinions among voters, the general population and consumers.

Recently, New York Times blogger Nate Silver took an online voter survey conducted in South Carolina to task in a post titled "Before Citing a Poll, Read the Fine Print."

Yes, caveat emptor (let the buyer beware) is an appropriate cautionary note for any research. But what factors should decision-makers look for to determine whether an online survey is valid?

CFM has found the following are key elements to conducting successful and accurate online studies.

  • Cast a wide net for emails. For general population surveys we collect email addresses from a variety of valid sources, such as e-billing, e-newsletters, website registrations and existing online panels. Using multiple sources for email addresses makes the final email list representative of the community.
  • Diversify sources. If appropriate, use several online panels for community surveys. Each panel has its strengths and weaknesses. Using several sources helps avoid potential biases inherent in all commercial panels.