Identifying Top Forecasters
a new series on methods for separating the best from the rest
We live in an increasingly complex and rapidly evolving world. Whether it's the outcome of an election, the path of a hurricane, or the capabilities of next-generation AI, the stakes are high. Finding information sources with the right incentives to provide sound judgment about what the future might hold is becoming ever more valuable for preparing for whatever comes next. This introductory post lays the groundwork for an ongoing series aimed at identifying and understanding top forecasters, examining the challenges involved, and improving the craft of forecasting.
Unlike profit-driven news reporting, where predictable narratives sell, forecasting with tracked outcomes incentivizes forecasters to reduce their biases and try to understand the world as it actually is. Forecasters can provide a valuable service to society, particularly if those in power listen. But even if forecasting never influences policy, it can still help those who are interested better prepare for an uncertain future.
In light of this, identifying top forecasters becomes imperative. In theory, we could simply trust the forecasters with the best track records, but in practice there are important caveats to consider. How do we distinguish the best from the rest?
What Lies Ahead
Over the course of this series, we'll look into the nuances of forecasting by exploring various facets of the problem, including the following areas:
Scoring Systems
How do we handle scenarios where different scoring systems yield different top forecasters on the same set of questions? How do we compare forecasters who operate on different platforms, each with its own scoring system and question types?
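As a toy illustration of how this can happen, consider two hypothetical forecasters scored on the same ten binary questions. The Brier score (mean squared error) caps the penalty for any single miss at 1, while the log score does not, so a sharp forecaster with one catastrophic miss can win under one rule and lose under the other. All numbers below are invented for illustration:

```python
import math

# Ten hypothetical binary questions that all resolved "yes" (1).
outcomes = [1] * 10
# Forecaster A: sharp and usually right, but one catastrophic miss (0.001).
a = [0.97] * 9 + [0.001]
# Forecaster B: hedged throughout, including on the question A missed.
b = [0.7] * 9 + [0.3]

def brier(probs, outcomes):
    """Mean squared error between forecast and outcome (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_loss(probs, outcomes):
    """Mean negative log-likelihood (lower is better); unbounded for confident misses."""
    return sum(-math.log(p if o == 1 else 1 - p)
               for p, o in zip(probs, outcomes)) / len(probs)

print(f"Brier: A={brier(a, outcomes):.3f}  B={brier(b, outcomes):.3f}")      # A wins
print(f"Log:   A={log_loss(a, outcomes):.3f}  B={log_loss(b, outcomes):.3f}")  # B wins
```

Under the Brier score A ranks first; under the log score B does. The same track record, two different "top forecasters."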
Luck vs Skill
How can we differentiate between a forecaster who has genuine expertise and one who just got lucky?
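One way to see the problem: with enough no-skill guessers, someone will compile an impressive record by chance alone. A minimal sketch using the binomial distribution (the counts of forecasters and questions are arbitrary):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability a single no-skill guesser gets >= k of n binary calls right."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_questions, threshold, n_forecasters = 10, 9, 1000
p_one = p_at_least(threshold, n_questions)   # ~0.011 for any one guesser
p_any = 1 - (1 - p_one) ** n_forecasters     # near-certain across the field

print(f"P(one guesser gets >= {threshold}/{n_questions} right) = {p_one:.4f}")
print(f"P(someone among {n_forecasters} guessers does)  = {p_any:.4f}")
```

Any individual coin-flipper almost never gets 9 of 10 calls right, yet across a thousand of them it is virtually guaranteed that someone does. A short streak alone tells us little about skill.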
Breadth vs Depth
Should a forecaster who makes relatively few but consistently outstanding forecasts be rated more highly than one who makes many forecasts that are only marginally better than the crowd?
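A toy comparison shows why the answer depends on the metric: ranking by average edge over the crowd favors the selective forecaster, while ranking by total edge favors the prolific one. The per-question "edge" numbers below (crowd Brier minus forecaster Brier, positive meaning the forecaster beat the crowd) are invented:

```python
# Hypothetical per-question edges over the crowd.
depth   = [0.05] * 5    # few forecasts, each clearly better than the crowd
breadth = [0.01] * 50   # many forecasts, each marginally better

for name, edges in (("depth", depth), ("breadth", breadth)):
    print(f"{name:8s} mean edge = {sum(edges) / len(edges):.3f}  "
          f"total edge = {sum(edges):.3f}")
```

The "depth" forecaster wins on mean edge (0.050 vs 0.010) while the "breadth" forecaster wins on total edge (0.500 vs 0.250), so a leaderboard has to pick a side.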
Generalists vs Specialists
What if a forecaster excels in one area (say, political forecasts) but is mediocre or even poor in others, like financial markets? What about domain experts who aren't forecasters?
Beating the Crowd
What types of questions do crowds usually get right? Is it easier to outperform the crowd in specific domains? How do we correct for systemic biases within a group of forecasters or the platform they use?
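One correction discussed in the forecast-aggregation literature is extremization: pushing the crowd's aggregate probability away from 0.5, on the theory that averaging individually reasonable forecasts yields an underconfident consensus. A minimal sketch, where the exponent is a tunable parameter rather than a canonical value:

```python
def extremize(p, a=2.5):
    """Push an aggregate probability away from 0.5 to correct crowd underconfidence."""
    return p**a / (p**a + (1 - p)**a)

crowd_median = 0.7
print(f"{extremize(crowd_median):.2f}")  # 0.89
```

With this exponent, a crowd median of 0.7 becomes roughly 0.89, while 0.5 stays at 0.5. Whether, and how much, to extremize is itself an empirical question that varies by platform and domain.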
Time Horizons
How do we account for the difficulty of different timeframes? Is a successful long-term forecast more valuable than a short-term one?
Methods and Reasoning
How much weight should be given to a forecaster's methods and reasoning?
By the end of this series, we'll have developed a well-rounded model of the factors that set top forecasters apart and discussed tools that can improve the accuracy and reliability of future forecasts.
The ability to anticipate what lies ahead will be one of our most valuable assets, and taking the time to understand and identify superior forecasting is an investment worth making.