Four months ago to the day, Partisan-Gravity released its final election model with some predictions attached.
No doubt this update makes for a pretty delayed post mortem. Even so, I feel it’s important to assess how the model fared and to discuss the future. This is especially important as our political media environment suffered a heavy blow today, with ABC News dissolving the storied 538 brand and laying off all of its employees.
The loss of 538 is tragic and worrisome. For years it has provided a transparent and rigorous polling aggregator alongside thoughtful coverage and analysis. This website, among many others, relied heavily upon their pollster rankings and databases. Without 538, there will be an enormous hole to fill in scientific and empirical polling analysis. This, no doubt, will have downstream effects in our political discourse.
Cutting to the chase: Model Performance
The Partisan-Gravity model indicated that the Presidential election was a toss-up in its final run. You can see how its predictions changed over time in the figure below:
With 10,000 simulations:
HARRIS WINS 5282 (52.82%)
TRUMP WINS 4578 (45.78%)
NO WINNER 140 (1.40%)

President Trump won the election against Vice President Harris. Even so, the model forecast suggested that such an outcome would happen, on average, nearly 46% of the time in a 10,000 simulation run.
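The headline numbers above come from repeated simulation of the electoral map. As a rough illustration, here is a minimal Monte Carlo sketch of that kind of forecast. Everything in it is hypothetical: the state probabilities, electoral-vote counts, and "safe" baselines are invented stand-ins, not the Partisan-Gravity model's actual inputs, and real models also correlate polling errors across states rather than drawing each state independently as this toy does.

```python
import random

# Hypothetical swing states: (electoral_votes, p_harris_win).
# These probabilities are illustrative placeholders only.
STATES = {
    "Pennsylvania":   (19, 0.51),
    "Michigan":       (15, 0.52),
    "Wisconsin":      (10, 0.53),
    "Georgia":        (16, 0.49),
    "North Carolina": (16, 0.48),
    "Arizona":        (11, 0.47),
    "Nevada":         (6,  0.51),
}
SAFE_HARRIS = 226  # assumed-safe electoral votes for Harris
SAFE_TRUMP = 219   # assumed-safe electoral votes for Trump

def simulate(n_sims=10_000, seed=42):
    """Run n_sims elections; count wins and 269-269 ties."""
    rng = random.Random(seed)
    tally = {"HARRIS": 0, "TRUMP": 0, "NO WINNER": 0}
    for _ in range(n_sims):
        harris = SAFE_HARRIS
        for ev, p in STATES.values():
            if rng.random() < p:
                harris += ev
        if harris >= 270:
            tally["HARRIS"] += 1
        elif 538 - harris >= 270:
            tally["TRUMP"] += 1
        else:
            tally["NO WINNER"] += 1  # exact 269-269 tie
    return tally
```

The "NO WINNER" bucket captures the 269-269 tie scenarios, which is why the reported percentages don't sum to 100.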
It’s true that Vice President Harris was expected to win a slightly higher percentage of the time. This is, however, a difference that is marginal at best. A critic might suggest that the model is non-falsifiable. How can you say a model did anything but well when you can claim success if either candidate won?
That would be a fair critique if the data we utilized didn’t otherwise suggest a nearly 50/50 race. In fact, you can see that at various points in the months prior to the election the model was not afraid to move away from toss-up status. It never, however, deviated from a small, if statistically insignificant, Harris advantage.
This marginal advantage, however, was based upon a polling landscape that was far kinder to Harris’ candidacy than it was to President Biden’s. There were many polls that showed some advantages for Trump in certain swing states, but these typically came from lower quality pollsters or, in some instances, newer pollsters that had issues with transparency.
These differences were accounted for with model decisions and pollster ratings that reduced the weight of uncertain and methodologically unsound pollsters. Those pollsters, and their results that were often good for President Trump, were therefore weaker in the overall polling averages than those of the industry heavyweights.
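The down-weighting described above can be sketched as a quality-weighted average. This is a simplified illustration of the general technique, not the model's actual weighting scheme: the pollster names, margins, and quality scores below are all invented.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    pollster: str
    harris_margin: float  # Harris minus Trump, in points
    quality: float        # hypothetical rating: 1.0 top-tier, 0.3 unproven

def weighted_average(polls):
    """Average poll margins, weighted by pollster quality."""
    total_weight = sum(p.quality for p in polls)
    return sum(p.harris_margin * p.quality for p in polls) / total_weight

polls = [
    Poll("LegacyPollsterA",   +1.5, 1.0),  # hypothetical data
    Poll("LegacyPollsterB",   +0.8, 0.9),
    Poll("NewOpaquePollsterC", -2.0, 0.3),  # Trump-leaning, low weight
]
# Unweighted mean: (1.5 + 0.8 - 2.0) / 3 = +0.1
# Weighted mean: (1.5*1.0 + 0.8*0.9 - 2.0*0.3) / 2.2 ≈ +0.74
```

The low-quality, Trump-leaning poll still moves the average, just with less pull than the industry heavyweights.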
And yet, I would make those same decisions again.
The pollster rankings and other choices, including assumptions about convention bounces and average presidential polling errors, are all calculated from a series of elections rather than a single observation. It would be unscientific to conclude, on the basis of one cycle, that each of these new pollsters that got it right was skilled rather than lucky. Then again, maybe they are doing something right and outperforming the legacy pollsters.
That would be a case that might develop over time, particularly as even the highest quality pollsters like the Des Moines Register recorded historic misses in their final polls. Until we have more elections, however, we won’t know if their performance is akin to skill or luck – or some combination of the two.
A careful reading of the model output then should have suggested a tight race that either candidate could win. The outcome, of course, was still fairly close. President Trump won the popular vote by about 1.5%. This is a smaller popular vote margin than in either 2016 or 2020. Of course, in 2016 President Trump won the election even while losing the popular vote.
Moreover, even as President Trump won Michigan, Wisconsin, North Carolina, Pennsylvania, Georgia, Nevada, and Arizona, these states were all quite close! The biggest miss in the model was in the state of Wisconsin, where Vice President Harris was shown to have a 65% chance of winning. Harris lost the state by just under 1%.
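One standard way to score misses like the Wisconsin call is the Brier score, which penalizes confident wrong probabilities more than toss-up calls. This is a generic sketch of the metric, not something the Partisan-Gravity model reports; apart from the 65% Wisconsin figure cited above, the probabilities below are placeholders.

```python
def brier(forecasts):
    """Mean squared error between forecast probability and outcome.

    forecasts: list of (p_harris, harris_won) pairs,
    where harris_won is 1 if Harris carried the state, else 0.
    Lower is better; a perfect forecast scores 0.0.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

calls = [
    (0.65, 0),  # Wisconsin: 65% Harris, Trump won -> biggest penalty
    (0.50, 0),  # a pure toss-up call, Trump won
    (0.45, 0),  # slight Trump lean, Trump won
]
# An always-50% forecast scores 0.25 on every call, so states
# rated near toss-up cost little even when they break the other way.
```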
All of this is to say that the model seems to have performed pretty well. If anything, it performed far better than my own judgement. Future model iterations will continue to assess pollster rankings and will have to consider whether convention bounces and other big events like debates matter, or if they’re just statistical noise.
My prediction was pretty bad
Even with the Partisan-Gravity model showing a dead heat, I jumped into the fray with my own prediction based upon my own priors.
In fact, I had the race almost entirely in reverse. I predicted that Vice President Harris would sweep most of the swing states except for Arizona. This would net her campaign 308 electoral votes – 38 more than the required 270 needed to win.

Instead, President Trump won 312 electoral college votes to Vice President Harris’ 226. His campaign won Nevada, Arizona, Georgia, North Carolina, Pennsylvania, Michigan, and Wisconsin. The races were all extremely close, but landed in his column.
That means that my prediction incorrectly called 6 states. In my defense, high quality polling truly showed a decent race for Harris in Wisconsin. In the other states, polling showed a much tighter competition with small Harris leads, except for Arizona, which I correctly assumed Trump would win.
Where did those incorrect assumptions come from?
The final weeks leading up to the election seemed to generate momentum that some polls appeared to capture, likely creating a kind of confirmation bias in my head. The backlash from Puerto Rican influencers, fallout from the final debate, polls from Iowa and Kansas showing a potentially cataclysmic decline in Republican support in key states, a relatively strong economy, and more all suggested to me a race with more going against Trump than for him.
Perhaps those were real, substantive observations. After all, the race was close. Maybe it would have been an even larger victory without any of those processes underway. Regardless, we all understand the world through our own biased, limited perspective. We are subject to echo chambers, elite capture, and a media economy built increasingly on algorithms designed to keep us engaged longer and poke holes into our sense of reality.
It’s true that my prediction was wrong, but it’s harder to suggest that the observations that colored it were incorrect or unfounded. I imagine that if I were faced with similar events in the future, I could again be swayed by observations that feel substantial to me, especially if other data points seem to correspond to my assumptions.
Even so, I imagine that I’ll be less inclined to deviate as readily from the model which, itself, has assumptions baked into its code. That is the case not just for the Partisan-Gravity model, but for any other model that makes assumptions about our world and the ways in which people act.
The midterm elections in 2026 therefore will be an excellent opportunity to keep honing this craft. We hope to see you then.