ChatGPT Filled Out An Extremely Boring NCAA Tournament Bracket
These days it feels like everything is about artificial intelligence, so it makes sense AI would become intertwined with March Madness.
VegasInsider.com used a prominent chatbot to try to predict how the 2026 NCAA Tournament will play out, and the results were… something.
Something most fans of the tournament probably hope turns out to be wrong.
AI loves the blue bloods
Take a look at this bracket ChatGPT filled out and something obvious will emerge quickly: There are almost no upsets.
In total, the chatbot picked only a few lower seeds to win, and all of them were close matchups by seed, so none is exactly earth-shattering.
In the first round, it’s No. 9 Utah State over No. 8 Villanova, No. 9 Saint Louis over No. 8 Georgia and No. 9 Iowa over No. 8 Clemson. (Congrats to Ohio State for being the only No. 8 seed to hold serve!)
Not much changes in the second round as the favorites all move on except for Nebraska and Kansas.
What do they have in common? Both are No. 4 seeds being “upset” by 5s (Vanderbilt and St. John’s, respectively).
OK, OK, but surely ChatGPT sees some fireworks for the final three rounds, right?
Not so much. The only other upset by seed it predicted the rest of the way was No. 3 Gonzaga over No. 2 Purdue in the West.
That would create an Elite Eight with four No. 1 seeds (Duke, Florida, Michigan and Arizona), three No. 2s (Houston, Iowa State and UConn) plus Gonzaga.
Thrilling, right?
It gets better: All four top seeds prevail, setting up Arizona vs. Michigan and Florida vs. Duke in the Final Four. That yields a final between the Wolverines and Gators, with Michigan coming out on top for its second-ever national title.
How did they get this result?
The folks at VegasInsider fed the AI lots of information, according to a press release announcing the results.
That included win-loss records, seeding, offensive and defensive performance, conference strength, player production, coaching experience and recent tournament history.
In all, over 1,700 data points were used, and they say the bot produced 71 pages and 10,000 words to justify its picks.
But was that really necessary?
Even someone with only the intelligence of a human can figure out AI would have a hard time sussing out potential upsets.
That’s because while AI operates in what is known and verifiable, upsets are exactly the opposite.
They are not logical, and that’s what makes them great.
If the data could tell us what to expect with high confidence, well, what would be the point of playing the games?
(And I would have won my bracket challenge more than twice in the last 25 years!)
Then again, last season’s tournament was pretty “chalky,” with all four No. 1 seeds making the Final Four for just the second time, and many (human) experts seem to be expecting a similar outcome this year.
So maybe AI is ahead of the curve — as boring as that might be.
