At first blush, polling in Virginia’s closely watched race for governor looks all over the map, from a 17-point lead for Democrat Ralph Northam to an 8-point advantage for Republican Ed Gillespie.
But what might appear to be a breakdown in pollster reliability may have a much simpler explanation: Not all surveys are conducted the same way, with variations in how respondents are selected accounting for the large range of possible results in Virginia.
For decades, pollsters randomly dialed phone numbers to achieve probability sampling: the principle that every person has a known, nonzero chance of being selected to participate in the survey. As state election authorities and political parties became more sophisticated, however, campaign polling consultants began to call only those on the voter rolls — and, later, only those voters who regularly participated in elections.
In recent years, more public pollsters have embraced this private technique. Now, with a week until Virginians go to the polls — and with pollsters eager to rebuild their standing after mistaken predictions in the presidential election last year — the majority of public surveys in the governor’s race are conducted this way. The public polling conducted using lists of registered voters, a method that proponents say is much more consistent, suggests Northam is the slight favorite.
Twelve public polls of the race were conducted entirely within the month of October, and despite the wide range overall, the eight in which voters on registration lists were called fell between a 7-point lead for Northam and a 1-point edge for Gillespie. The three using what pollsters call random-digit dialing, or having a computer randomly generate a Virginia phone number, are outliers, with much wider margins. (The remaining poll, from The Washington Post and George Mason University, combined the two methods.)
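The random-digit-dialing idea can be sketched in a few lines of code. This is a hypothetical illustration only: the area codes listed and the exchange rule are assumptions for the sake of the example, not any pollster's actual sampling frame.

```python
import random

# Assumed set of Virginia area codes for this illustration.
VIRGINIA_AREA_CODES = ["276", "434", "540", "571", "703", "757", "804"]

def random_virginia_number(rng: random.Random) -> str:
    """Generate one candidate phone number by random-digit dialing."""
    area = rng.choice(VIRGINIA_AREA_CODES)
    # The three-digit exchange cannot begin with 0 or 1.
    exchange = str(rng.randint(2, 9)) + f"{rng.randint(0, 99):02d}"
    line = f"{rng.randint(0, 9999):04d}"
    return f"({area}) {exchange}-{line}"

rng = random.Random(0)
sample = [random_virginia_number(rng) for _ in range(3)]
```

Because every possible number in the frame is equally likely to be generated, dialing such numbers reaches unlisted phones and new registrants alike — the appeal of the method — but most calls land on people who will never vote, which is why it is inefficient in low-turnout races.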
The distinction between random-digit dialing and calling from voter lists — known in survey research as registration-based sampling — seems modest. But campaign pollsters have argued for years that using voter lists enhances accuracy and provides stability to the often-volatile world of public polling.
“I think there’s some valuable contributions from the world of campaign surveys,” said Scott Clement, polling manager for The Washington Post, which conducted a survey in Virginia in conjunction with George Mason University that, in part, used a voter list. “The use of the voter file has become more sophisticated over the past years.”
It’s not hard to see why using the voter list might be helpful — especially in off-year races like the Virginia gubernatorial contest, or in midterm elections, when fewer voters turn out. While 72 percent of registered voters in Virginia participated in last year’s presidential election, only 43 percent voted in the previous governor’s race, in 2013, and only 42 percent voted in the 2014 midterms.
One of the hardest things for pollsters to do is figure out which registered voters will make up that 42 or 43 percent. Taking another page from the campaign pollsters’ playbook, some public surveys contact only those with a proven record of voting. (Voter files don’t reveal which candidates were chosen, but they do state whether the voter submitted a ballot in a given election.)
In recent nonpresidential elections, those who stayed home tended to be Democrats, giving Republicans an advantage when the presidency isn’t on the ballot. A Pew Research Center study last month found that those who voted in 2016 and 2012 — but not in the 2014 midterms — were younger, more likely to be nonwhite and more likely to identify as Democrats than those who voted in all three elections. Among consistent voters, 51 percent identified as Republicans or said they lean toward the GOP, while 47 percent were Democrats or leaned that way. But among those who voted in 2016 and 2012 while skipping 2014, 58 percent were Democrats and 40 percent were Republicans.
Two academic pollsters, Christopher Newport University and Monmouth University, impose additional screens before voters can even get in the sample. For its final two polls before the election, Christopher Newport’s Wason Center is calling only those who have participated in at least two of the four previous statewide elections — unless they registered to vote after March 2016, in which case they must have voted in the 2016 election.
Once pollsters reach those reliable voters, the interview continues only if the voters say they are thinking about the gubernatorial election, following news about the campaign and likely to vote next week. Rachel Bitecofer, the assistant director of the Wason Center, said the pollster was keeping the precise model “private for proprietary reasons.” But the latest methodology statement stressed, “This screening results in a projected turnout of approximately 44 percent, which fits the trend of recent Virginia gubernatorial elections.”
The Monmouth screen is similar: Voters must have participated in two of the four previous elections, or registered since January 2016, to be included in the sample. Then there’s the likely-voter screen — as well as demographic modeling to ensure that the pool of voters surveyed matches the likely electorate.
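A vote-history screen of the kind both pollsters describe can be expressed as a simple rule over the voter file. The sketch below is an assumption-laden illustration — the record layout, the March 2016 cutoff and the particular four elections counted are stand-ins, not either university's actual model.

```python
from dataclasses import dataclass
from datetime import date

# Assumed set of the four previous statewide elections (illustrative).
STATEWIDE_ELECTIONS = {"2016 general", "2014 midterm",
                       "2013 governor", "2012 general"}

@dataclass
class Voter:
    registered: date       # registration date from the voter file
    voted_in: set          # elections in which a ballot was cast

def passes_screen(voter: Voter, cutoff: date = date(2016, 3, 1)) -> bool:
    """Include a voter who cast ballots in at least two of the four
    previous statewide elections, or who registered after the cutoff
    and voted in the 2016 election."""
    history = voter.voted_in & STATEWIDE_ELECTIONS
    if len(history) >= 2:
        return True
    return voter.registered > cutoff and "2016 general" in voter.voted_in

# A two-time voter and a new registrant who voted in 2016 both qualify;
# a long-registered voter with only one ballot on file does not.
regular = Voter(date(2010, 1, 1), {"2016 general", "2012 general"})
newcomer = Voter(date(2016, 6, 1), {"2016 general"})
sporadic = Voter(date(2010, 1, 1), {"2016 general"})
```

Only voters who pass this registration-based screen would then face the attitudinal questions — interest in the race, likelihood of voting — that both pollsters layer on top.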
“Historically, there’s a precedent for the demographics of voters who show up year after year in these elections. And you have that data from the voter rolls,” said Patrick Murray, the director of the Monmouth University Polling Institute. “We don’t know exactly who is going to show up in any given election, but we know the demographics of who shows up in these types of elections.”
Even The Washington Post, long committed to traditional methods, has experimented with using voter lists in some of its local polling. The most recent Washington Post/George Mason poll in Virginia included respondents contacted from both the voter rolls and random-digit dialing, or RDD.
“Over the last few years, we’ve used a combination of RDD and voter-list approaches in limited circumstances,” said Clement, The Post’s polling manager. “It’s far more efficient to reach voters through a voter list than it is through RDD, especially in low-turnout elections.”
One factor allowing increased use of voter lists: the improved quality of those lists. Previously, it was the political parties that did the work of matching telephone numbers and other public information to the names on the voter rolls. While campaign pollsters could use that data to build their samples, it wasn’t readily available for public pollsters.
“The kind of access that public pollsters had to voter lists wasn’t as good as the campaigns because we didn’t have the resources” to match known phone numbers and addresses to names on the voter files, Murray said. “Now, the major vendors have caught up to that. We actually buy very good voter lists without having to have an ‘in’ with one of the parties to try to finagle their list from them.”
But the quality of the voter file still varies state by state. North Dakota doesn’t have voter registration at all, for example.
“Depending on the state, voter lists are either very good or not very good at all,” said J. Ann Selzer, the Iowa-based pollster who conducts surveys for The Des Moines Register.
And just because off-year electorates have looked demographically similar in the past doesn’t mean they will this year. Doug Schwartz, who runs the Quinnipiac University poll, defended random-digit dialing.
“RDD is still considered the gold standard,” Schwartz said. “All the major polls use RDD: Gallup, Pew, ABC News/Washington Post, CBS News, CNN.”
As for Quinnipiac’s latest poll, which shows Northam with a 17-point lead, Schwartz said, “We have a good track record, and we trust our numbers.”
Ultimately, the goal for all public pollsters doing pre-election surveys is to produce a more precise result. And in off-year elections in Virginia, that means making sure that the right voters are included in the survey and that those who won’t cast a ballot are excluded.
“Everyone came out of 2016 trying to figure out better ways to more accurately reflect who actually went to vote,” said Joe Lenski, the co-founder and executive vice president of Edison Research, which conducts the exit polls for major media organizations. “Whether that includes more voter-list sampling, more hybrid sampling, you’re definitely going to see more of that.”