The Cantor Prediction Is Part of a Pattern: GOP Pollsters Stink
The last few years have regrettably made the phrase “Republican pollster” less a job title than a punch line. From the 2012 election, when many in the GOP were stunned by the Obama campaign’s victory, to the closer-than-expected 2013 Virginia gubernatorial race, all the way to the present, Republican polling has been a much-maligned sector of the campaign-industrial complex, and not without reason.
This past week, prominent Republican pollster John McLaughlin came under fire for his firm’s polling that showed House Majority Leader Eric Cantor up by 34 points over his insurgent opponent, David Brat, in his congressional primary in Virginia. Cantor and his team, assuming a relatively easy victory was imminent, were blindsided last Tuesday by a double-digit loss.
While there’s definitely a need for accountability in terms of what campaign consultants are providing for their price tags, the piling-on of McLaughlin has been unseemly and dishearteningly personal. He is not the only pollster to get a race wrong, not by a long shot. For instance, in 2012, nonpartisan pollster Gallup whiffed and showed Romney ahead on Election Day nationwide. In 2013, most pollsters in Virginia’s governor’s race underestimated just how close Republican Attorney General Ken Cuccinelli came in his bid to defeat Democrat Terry McAuliffe.
With fewer and fewer people picking up the phone to bother talking to pollsters in the first place, getting it right is hard.
But for a movement that has been caught off-guard by multiple election outcomes lately, the singular and personal focus on tearing down one consultant misses that this is a pattern. The more important question for Republicans to ask: Do we really know how to decide who is and isn’t a “likely voter”?
The way many pollsters do their research is to call people off a list of registered voters, contacting only those who typically vote in elections like the one in question. (Pollsters define “typically” however they want.) If you’re an irregular or infrequent voter, you’re not getting called, on the assumption that you’re probably not going to hike out to your polling place this particular time if you never really have before. Apparently, both the McLaughlin poll and the other major public poll in the Cantor race used such a methodology, screening out “non-likely voters” on the front end.
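A front-end screen of this kind amounts to a simple filter over the voter file. The sketch below is illustrative only: the field names, the list of past primaries, and the two-of-three rule are invented for this example, not the methodology of McLaughlin's firm or any actual pollster.

```python
# A minimal sketch of a vote-history "likely voter" screen.
# Field names and the two-of-three rule are illustrative assumptions,
# not any actual firm's methodology.

def likely_voters(voter_file, past_primaries=("2008P", "2010P", "2012P"), min_votes=2):
    """Keep only registrants who voted in at least `min_votes` of the
    listed past primaries; everyone else is simply never called."""
    return [
        v for v in voter_file
        if sum(1 for e in past_primaries if e in v["vote_history"]) >= min_votes
    ]

voter_file = [
    {"name": "A", "vote_history": {"2008P", "2010P", "2012P"}},  # regular primary voter
    {"name": "B", "vote_history": {"2012G"}},                    # general-election-only voter
    {"name": "C", "vote_history": set()},                        # new or infrequent voter
]

sample_frame = likely_voters(voter_file)
# Only voter "A" survives the screen; B and C are never contacted,
# even if they intend to vote this time.
```

The point of the sketch is the failure mode: anyone whose history doesn't match the rule is invisible to the poll by construction.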
The reason pollsters do this makes sense. Asking voters to tell you how likely they are to vote is a dangerous proposition, given that voters’ stated intentions don’t always match up with what they do. Yet using vote history alone to decide who is or isn’t a “likely voter” may have been the fatal flaw in the research.
Cantor hadn’t faced a primary opponent in 2008 or 2010, so there wasn’t much voting history to draw on. In 2012, around 33,000 people turned out for the GOP presidential primary in Cantor’s district, and around 47,000 people turned out in that year’s congressional primary, which was not strongly contested and in which Cantor cruised to victory.
This year? More than 65,000 votes were cast. If you were only calling people who had an established history of voting in Republican primaries in that district, you were missing tons of potential voters.
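The size of that blind spot is back-of-the-envelope arithmetic. Using the approximate turnout figures cited above, even the most generous history-based frame, everyone who voted in the 2012 congressional primary, could cover at most about seven in ten of this year's voters:

```python
# Back-of-the-envelope arithmetic on the approximate turnout figures above.
presidential_primary_2012 = 33_000
congressional_primary_2012 = 47_000
primary_2014 = 65_000

# Even the most generous history-based frame (2012 congressional primary)
# covers at most this fraction of the 2014 electorate:
max_coverage = congressional_primary_2012 / primary_2014

# And at minimum this many 2014 votes came from people with no such history:
new_or_unscreened = primary_2014 - congressional_primary_2012

print(f"At best {max_coverage:.0%} of 2014 voters had that history;")
print(f"at least {new_or_unscreened:,} votes could fall outside the screen.")
```

That is the best case; in practice the overlap between the two electorates would be smaller still, since not everyone who voted in 2012 returned in 2014.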
So if we know that our “likely voter” screens are a problem, that people aren’t great at assessing their own likelihood to vote, and that looking at voting history in isolation can lead us to miss out when the electorate has completely changed, what is there to do?
The reality is that the electorate is changing. In 2012, many Republicans thought there was no way that turnout among groups like African-Americans or the young could possibly keep pace with 2008. They were wrong. The remarkably effective turnout machine of the Obama campaign proved they could continue to create the electorate they needed to win. And without a clear, evidence-based model for who is and isn’t likely to vote, new research suggests the ups and downs you see in your polling might just be sampling bias rather than real movement of opinion.
In the 2013 race for governor in Virginia, one pollster did get a tough race right on the money—Democratic pollster Geoff Garin. He used previous vote history as just one of a variety of factors, built on sophisticated modeling, to assess someone’s likelihood of voting. Or take Harry Reid’s pollster Mark Mellman, who has said that he is less concerned with identifying whether individuals are or aren’t “likely voters” and instead focuses on figuring out what the “likely electorate” will be, building his samples accordingly. In interviews this week, McLaughlin noted that “polls don’t predict turnout,” but with the use of good modeling and smarter analytics, it seems top Democrats might disagree.
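The difference between the two approaches can be sketched in a few lines. In the scoring version, vote history is one weighted input among several rather than a hard gate. To be clear, the factors, weights, and field names below are invented for illustration; they are not Garin's or Mellman's actual models.

```python
# Illustrative sketch of a multi-factor turnout score. The factors and
# weights are invented for this example, not any pollster's real model.

def turnout_score(voter):
    """Combine several signals into a rough 0-to-1 turnout estimate."""
    score = 0.0
    score += 0.4 * voter["past_primary_rate"]       # vote history: one input, not a gate
    score += 0.2 * voter["stated_intent"]           # self-report, discounted as unreliable
    score += 0.2 * voter["contacted_by_campaign"]   # field-program touches
    score += 0.2 * voter["demographic_propensity"]  # modeled from similar voters
    return score

voters = [
    # A habitual primary voter:
    {"past_primary_rate": 1.0, "stated_intent": 1.0,
     "contacted_by_campaign": 0, "demographic_propensity": 0.8},
    # No primary history, but intends to vote and was contacted by a campaign:
    {"past_primary_rate": 0.0, "stated_intent": 1.0,
     "contacted_by_campaign": 1, "demographic_propensity": 0.6},
]

scores = [turnout_score(v) for v in voters]
# The second voter has no primary history but still earns a meaningful
# score, instead of being screened out of the sample entirely.
```

A history-only screen would have dropped the second voter at the first step; a scored model keeps them in the sample frame at a lower weight, which is the whole difference when the electorate changes shape.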
This isn’t just about one political consultant or one high-profile miss. This is about the future of polling, analytics, and what we can (and can’t) know about the people who might cast votes. If polling data are supposed to tell us the story of what’s going on in an election, we need to make sure that story isn’t just a fairy tale.