What the Harris vs. Trump Polls Got Wrong
On Election Night, Donald Trump won a straightforward, if not overwhelming, victory over Kamala Harris in both the Electoral College and the national popular vote. We won’t have final 2024 election results for at least a week, as ballots are still being counted, but the returns so far give a preliminary sense of how accurate the polls were this year. Here’s a look at how the pollsters anticipated various outcomes.
National polls: Trump underestimated again
The national polling averages (each an estimate of the national popular vote) all showed Harris with an advantage: She led by 1.2 percent per FiveThirtyEight, 1 percent per Nate Silver, 2 percent according to the Washington Post (which rounds its numbers), and 1 percent according to the New York Times (which also rounds). RealClearPolitics, which, unlike the other outlets, doesn’t weight polls for accuracy or adjust them for partisan bias, showed Harris leading by 0.1 percent.
At the moment, Trump leads Harris in the national popular vote by 3.5 percent (51 to 47.5 percent). That margin is likely to shrink given the historical tendency of late-counted mail and provisional ballots (particularly in California) to skew Democratic by a significant margin, a phenomenon known as the “blue shift.” But even if Trump’s margin is shaved by a full percentage point (not a bad guess), the national polls were still off by 3 to 4 percent. According to a historical analysis from FiveThirtyEight, the average polling error in presidential elections from 1972 to 2020 was 2.3 percent.
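For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch, not any outlet’s actual methodology; the inputs are simply the FiveThirtyEight average and the counts discussed above, with margins expressed as Trump minus Harris in percentage points.

```python
# Back-of-the-envelope polling-error arithmetic (illustrative only).
# Margins are Trump minus Harris, in percentage points,
# so a Harris +1.2 polling average is entered as -1.2.

def polling_error(polled_margin: float, actual_margin: float) -> float:
    """Signed error: positive means the polls understated Trump's margin."""
    return actual_margin - polled_margin

# FiveThirtyEight's final average (Harris +1.2) against today's count (Trump +3.5),
# and against a count trimmed by one point for the expected "blue shift":
print(polling_error(-1.2, 3.5))  # 4.7 points on the current count
print(polling_error(-1.2, 2.5))  # 3.7 points if the blue shift shaves a full point
```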
So the 2024 national polling error does not appear to be all that egregious, and it may be less than the 4 percent error of 2012 (when the polls underestimated Barack Obama’s vote) or 2020 (when they overestimated Joe Biden’s vote). But the fact remains that for the third consecutive election, the polls underestimated the national popular vote of the same Republican candidate. Does that say something about pollsters, about Republican voters, or specifically about Trump? The polling industry and its media sponsors will be debating that question for the next few months. What is reasonably clear is that the “adjustments” pollsters made after 2020, which led some pundits to conclude they had overcorrected this year and were disguising a sizable impending Harris win, weren’t enough.
The most accurate 2024 national pollsters were a mix of outfits that generally expect stronger Republican performance than their peers do (e.g., AtlasIntel, whose final poll had Trump up by a point) and a few MSM polls that struck gold (e.g., The Wall Street Journal, whose final poll had Trump up by three points). The most inaccurate were probably NPR-Marist and Morning Consult, which consistently showed Harris with robust leads. But there really weren’t many wildly inaccurate polls like those that showed Biden winning in a blowout in 2020.
Battleground-state polls: within the margin of error
Across averages from different outlets deploying different methodologies, battleground-state polls were unusually close. In the New York Times averages, for example, a one-point uniform swing at the last minute could have moved six states from one candidate to the other. About the only exception to the too-close-to-call pattern of battleground polling averages was a consensus that Trump had a solid lead in Arizona (2 to 3 percent per FiveThirtyEight, Nate Silver, the Times, the Post, and RealClearPolitics). But even there, all the averages fell within the roughly 3.5 percent margin of error typical of an individual poll.
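As a point of reference, that 3.5 percent figure is roughly what the textbook sampling formula produces for a survey of about 800 respondents; the sketch below is a generic illustration under that assumed sample size, not a reconstruction of any outlet’s math.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for one candidate's share in a simple random sample.

    Assumes the worst case p = 0.5 and ignores the design effects of weighting,
    which widen real-world intervals; the margin on the gap between two
    candidates is roughly twice this figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of roughly 800 respondents carries the ~3.5-point margin cited above.
print(round(margin_of_error(800) * 100, 1))  # prints 3.5
```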
Were the polls off more in particular battleground states? A lot of votes are still out in Arizona, and some (though fewer) in Nevada; those are the two states where Trump currently holds his biggest battleground leads, about five points (52 percent to 47 percent). Since Nevada was treated as very close in all the averages, it may wind up as the battleground state with the biggest polling error (a distinction Wisconsin held in both 2016 and 2020). But Nevada is famously a tough state to poll, and the final numbers may be a little less red. It’s hard to get too worked up about the failure of some polls and some averaging outlets to nail Trump wins of 3 percent (North Carolina), 2 percent (Georgia), and 1 percent (Michigan, Pennsylvania, and Wisconsin). These states really were too close to call, and the fact that they all swung together (as Silver in particular kept warning might happen) is actually a partial testament to the accuracy of polls that showed them all equally in play.
We will hear a lot going forward from traditionally Republican-leaning state pollsters (e.g., Trafalgar Group and InsiderAdvantage) claiming they nailed the election, though it’s hard to tell whether they knew something others didn’t or simply got closer on the broken-clock-is-right-twice-a-day principle. In fact, as all the recent fretting about pollster “herding” reflected, there wasn’t a massive amount of variation among battleground-state polls. In Arizona, “liberal” pollsters like Marist and Data for Progress showed Trump ahead down the stretch. And it’s worth noting that the New York Times–Siena operation, which regularly produced some of the most Trump-favorable battleground polls throughout the cycle, wound up showing Harris leading in Georgia, Nevada, North Carolina, and Wisconsin in its final polls.
But the public pollster most embarrassed by the results was the highly respected Ann Selzer, who created a huge stir with a late poll showing Harris leading by three points in Iowa. Trump won the state, where Selzer has been a polling fixture for decades, by 13 points.
Election forecasts: not that accurate
A number of prominent polling number crunchers also maintained election-forecast models. How did they do? Not great: Only Decision Desk HQ (generally Republican-leaning, though rigorous) predicted a Trump win, and even it gave that outcome just a 54 percent probability. Silver and his old site, FiveThirtyEight, had the race dead even, with the slightest lean toward Harris. And The Economist gave Harris a 56 percent probability of winning.
All in all, the Trump-Harris contest was a mixed bag for pollsters and those who slice and dice their data. But nobody should feel misled. It was truly too close to call.