Being able to forecast the various ways in which the future may play out is a key skill for managers, and indeed organizations. It’s perhaps no great surprise that many use technology to help them do the job.
Technologies such as AI and big data have become increasingly proficient at spotting trends in large data sets. Indeed, so proficient have they become that the University of Toronto’s Ajay Agrawal, Joshua Gans, and Avi Goldfarb argue that lowering the cost of prediction will be the main benefit delivered by AI in the near term.
A sign of progress comes in a recent paper by researchers at the University of Cordoba, which chronicles their work on a forecasting model capable of producing accurate forecasts with less data than previous models have required.
“When you are dealing with a large volume of data, there are two solutions. You either increase computer performance, which is very expensive, or you reduce the quantity of information needed for the process to be done properly,” the authors explain.
Automating the job
The natural next step, it seems, is to fully automate the job of forecasting. Is AI as effective as human forecasters, though? No one is better placed to answer that question than Wharton’s Philip Tetlock, whose work on “superforecasters” has led the way in understanding how the best forecasts are made.
In a recent study, he and his colleagues show that the latest large language models can produce forecasts comparable to those of the best humans. It’s a finding they believe opens the path to making forecasting more widely available, by being not only cheaper but also faster than human forecasters.
“What we’re seeing here is a paradigm shift: AI predictions aren’t just matching human expertise — they’re changing how we think about forecasting entirely,” Tetlock says.
The silicon crowd
The researchers pooled the predictions made by a number of LLMs in what they believe is a practical method that organizations can deploy relatively easily to deliver high-quality forecasts without needing a highly qualified (and expensive) team.
“This isn’t about replacing humans, however,” Tetlock says, “it’s about making predictions smarter, faster, and more accessible.”
The researchers explain that while individual AI models have often struggled to forecast effectively, combining them produces a kind of “wisdom of silicon crowds” effect. Just as the “wisdom of crowds” allows groups of humans to average out their individual biases, the same can occur when AI predictions are combined. Each model brings a slightly different perspective, and pooling them can produce a valuable consensus.
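To make that aggregation step concrete, here is a minimal sketch of how several models’ probability forecasts might be pooled. It is an illustration of the general idea rather than the researchers’ actual pipeline, and the model names and numbers are made up.

```python
# A minimal sketch of the "wisdom of silicon crowds" idea: aggregate
# probability forecasts from several (hypothetical) models by taking
# the median, so no single model's bias dominates the result.
from statistics import median

# Hypothetical probabilities (0-1) from different LLMs for the question
# "Will event X happen by the end of the year?"
model_forecasts = {
    "model_a": 0.72,
    "model_b": 0.55,
    "model_c": 0.80,
    "model_d": 0.60,
}

# The ensemble forecast is simply the median of the individual forecasts;
# a mean or trimmed mean is a common alternative.
ensemble_forecast = median(model_forecasts.values())
print(f"Ensemble forecast: {ensemble_forecast:.2f}")  # -> 0.66
```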
Man and machine
It’s worth noting that, in accordance with Kasparov’s law, the best results occurred when humans and AI worked together: the most accurate forecasts came not from letting AI work alone but from combining its predictions with human judgment, which improved the AI’s predictions by nearly 30%. In other words, the combination of human intuition and machine precision was most effective at producing reliable forecasts, and while AI is improving fast, human input remains vital for getting things right.
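One simple way to picture that human–machine blend is a weighted average of the two probability estimates. This is purely an illustrative assumption, not the weighting scheme used in the study, and the 50/50 split and example numbers are invented.

```python
# A hedged illustration of blending a human forecast with a machine
# ensemble forecast via a weighted average. The equal weighting is an
# assumption for illustration only.
def blend_forecasts(human_prob: float, ai_prob: float, human_weight: float = 0.5) -> float:
    """Return a weighted average of a human and an AI probability forecast."""
    return human_weight * human_prob + (1 - human_weight) * ai_prob

# Example: a human judge says 40%, the AI ensemble says 66%.
print(blend_forecasts(0.40, 0.66))  # -> 0.53
```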
Tetlock and his colleagues tested their methods in real-world settings by creating questions and scenarios that the AI hadn’t seen during training. This ensured the AI wasn’t just repeating memorized information.
The results were promising but showed clear limits. AI struggles when there’s a big gap between its training data and the events it’s trying to predict. Without up-to-date knowledge, its accuracy drops.
Dealing with overconfidence
Another problem is overconfidence. AI often gives high odds to outcomes that don’t match the evidence, which shows why human judgment is needed to keep its predictions in check.
One way of addressing this challenge is something the researchers refer to as “resolution”, which helps to distinguish between outcomes that are likely and those that are less so. The goal is to ensure that higher probabilities are assigned to events that occur and lower probabilities to events that don’t.
“The key to resolution is confidence with clarity — bet big on what’s likely and back off where it’s not,” Tetlock says.
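For readers who want to see the idea in practice, resolution is a standard component of forecast-evaluation measures such as the Brier score: it rewards forecasters whose probabilities separate events that happen from those that don’t. The sketch below, using made-up forecasts and outcomes, shows one common way to compute it.

```python
# A rough sketch of how "resolution" is typically measured: group forecasts
# into probability bins, then check how far each bin's observed outcome rate
# sits from the overall base rate. Higher resolution means the forecaster
# better separates events that happen from those that don't.
# The forecasts and outcomes below are illustrative data, not study results.
from collections import defaultdict

forecasts = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # predicted probabilities
outcomes  = [1,   1,   0,   0,   0,   1]     # 1 = event occurred

base_rate = sum(outcomes) / len(outcomes)

# Bin forecasts to one decimal place (e.g. 0.7, 0.8, ...).
bins = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    bins[round(p, 1)].append(o)

# Resolution = weighted variance of each bin's outcome rate around the base rate.
resolution = sum(
    len(obs) * (sum(obs) / len(obs) - base_rate) ** 2 for obs in bins.values()
) / len(outcomes)

print(f"Base rate: {base_rate:.2f}, resolution: {resolution:.3f}")
```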
So, it seems AI could have a role to play in helping us forecast more effectively, but Tetlock admits that we’re at the very earliest stage of this journey. He and his colleagues also accept that the humans they compared AI with, while certainly highly educated, weren’t the elite “superforecasters” who thrive in his Good Judgment Project competitions. Those superforecasters remain the pinnacle of forecasting that even AI can’t beat.
Given that most organizations don’t have access to these superforecasters, however, there is perhaps a place for AI in broadening access to “good enough” forecasts.