I had the idea for FiveThirtyEight (which refers to the number of votes in the Electoral College) while waiting out a delayed flight at Louis Armstrong New Orleans International Airport in February 2008. For some reason—possibly the Cajun martinis had stirred something up—it suddenly seemed obvious that someone needed to build a website that predicted how well Hillary Clinton and Barack Obama, then still in heated contention for the Democratic nomination, would fare against John McCain.
My interest in electoral politics had begun slightly earlier, however—and had been mostly the result of frustration rather than any affection for the political process. I had carefully monitored Congress's 2006 attempt to ban Internet poker, which was then one of my main sources of income. I found political coverage wanting even as compared with something like sports, where the “Moneyball revolution” had significantly improved analysis.
During the run-up to the primary I found myself watching more and more political TV, mostly MSNBC, CNN, and Fox News. A lot of the coverage was vapid. Despite the election being many months away, commentary focused on the inevitability of Clinton’s nomination, ignoring the uncertainty intrinsic to such early polls. There seemed to be too much focus on Clinton’s gender and Obama’s race. There was an obsession with determining which candidate had “won the day” by making some clever quip at a press conference or getting some no-name senator to endorse them—things that 99 percent of voters did not care about.
Political news, and especially the important news that really affects the campaign, proceeds at an irregular pace. But news coverage is produced every day. Most of it is filler, packaged in the form of stories that are designed to obscure its unimportance. Not only does political coverage often lose the signal—it frequently accentuates the noise. If there are a number of polls in a state that show the Republican ahead, it won’t make news when another one says the same thing. But if a new poll comes out showing the Democrat with the lead, it will grab headlines—even though the poll is probably an outlier and won’t predict the outcome accurately.
The bar set by the competition, in other words, was invitingly low. Someone could look like a genius simply by doing some fairly basic research into what really has predictive power in a political campaign. So I began blogging at the website Daily Kos, posting detailed and data-driven analyses on issues like polls and fundraising numbers. I studied which polling firms had been most accurate in the past, and how much winning one state—Iowa, for instance—tended to shift the numbers in another. The articles quickly gained a following, even though the commentary at sites like Daily Kos is usually more qualitative (and partisan) than quantitative. In March 2008, I spun my analysis out to my own website, FiveThirtyEight, which sought to make predictions about the general election.
The FiveThirtyEight forecasting model started out pretty simple—basically, it took an average of polls but weighted them according to their past accuracy—then gradually became more intricate. But it abided by three broad principles:
1. Think probabilistically. Almost all the forecasts that I publish, in politics and other fields, are probabilistic. Instead of spitting out just one number and claiming to know exactly what will happen, I articulate a range of possible outcomes.
2. Today’s forecast is the first forecast of the rest of your life. Another misconception is that a good prediction shouldn’t change. Ultimately, the right attitude is that you should make the best forecast possible today—regardless of what you said last week, last month, or last year.
3. Look for consensus. Quite a lot of evidence suggests that aggregate or group forecasts are more accurate than individual ones, often somewhere between 15 and 20 percent more accurate depending on the discipline.
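The weighted polling average described above can be sketched in a few lines of code. To be clear, this is a toy illustration of the general idea, not FiveThirtyEight's actual model: the polls, weights, and standard error below are all hypothetical numbers invented for the example.

```python
# A minimal sketch of a weighted polling average, in the spirit of the
# model described above. All figures are hypothetical illustrations.

# Each poll: (candidate's share in percent, weight based on the firm's past accuracy)
polls = [
    (51.0, 0.9),   # firm with a strong track record gets a high weight
    (48.0, 0.5),   # less reliable firm gets a lower weight
    (50.0, 0.7),
]

def weighted_average(polls):
    """Average the poll results, weighting each by its firm's past accuracy."""
    total_weight = sum(w for _, w in polls)
    return sum(share * w for share, w in polls) / total_weight

estimate = weighted_average(polls)
print(f"Weighted estimate: {estimate:.1f}%")  # prints "Weighted estimate: 50.0%"

# Principle 1 (think probabilistically): report a range, not a single number.
# Here we assume, purely for illustration, a standard error of 3 points.
std_error = 3.0
low, high = estimate - 2 * std_error, estimate + 2 * std_error
print(f"~95% range: {low:.1f}% to {high:.1f}%")  # prints "~95% range: 44.0% to 56.0%"
```

Weighting by past accuracy and averaging across firms is itself a form of the consensus principle: each poll is an individual forecast, and the aggregate tends to be more accurate than any one of them.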
So I adopted some different habits from the pundits I saw on TV. I learned how to express—and quantify—the uncertainty in my predictions. I updated my forecast as facts and circumstances changed. I recognized that there is wisdom in seeing the world from a different viewpoint. The more you are willing to do these things, the more capable you will be of evaluating a wide variety of information without abusing it.
Excerpted and adapted from The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t by Nate Silver. Copyright © 2012 by Nate Silver. Used by permission of Penguin Press HC. All rights reserved.