Our Vision

By Wojciech Gryc on February 22, 2024

As the world becomes multipolar, as we deal with new and evolving polycrises, and as we battle global challenges like climate change, we will need better approaches to forecasting the future. How do we minimize uncertainty? How do we build a better understanding of the complex interplay between news, events, governments, and non-state actors?

Today's forecasting approaches are not adequate, and large language models (LLMs) and other foundation models present new opportunities.

What's the problem with forecasting?

Human analysts can only analyze a limited amount of information, and they are slow. There are only so many reports, documents, and web pages a human analyst can read, and only so many queries or statistical models they can write. Foundation models do not have this problem: they can analyze as much structured and unstructured data as you give them.

It's not even clear that expert opinions hold up against structured analysis that is regularly updated. As explored in books like Superforecasting, structuring qualitative analysis properly and regularly revisiting forecasts can yield better predictions than those made by experts.

Many forecasts aren't purely quantitative, even those predicting a numerical variable. If you are predicting a numerical metric, your forecast likely depends on numerous qualitative or unstructured inputs: shocks like policy decisions, elections, military activity, and more. This makes it nearly impossible to produce accurate forecasts without some form of qualitative input.

Chaining forecasts at scale is not possible with humans. Many forecasts depend on each other. The price of oil depends on countless variables: the value of the US dollar, OPEC price commitments, demand, reserves, and more. Manually building forecasts for every input variable that feeds into other forecasts is impossible at scale.
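Chained forecasts can be pictured as a dependency graph, where each forecast is computed from the resolved values of its input forecasts. A minimal sketch in Python, with made-up numbers and a toy pricing formula purely for illustration:

```python
# Illustrative only: each forecast is a (function, input-names) pair.
# The values and the oil-price formula are invented for this sketch.
forecasts = {
    "usd_index": (lambda deps: 104.0, []),
    "opec_commitment": (lambda deps: 0.9, []),
    "oil_demand": (lambda deps: 102.5, []),
    "oil_price": (
        lambda deps: 60 + 20 * deps["opec_commitment"]
        - 0.1 * deps["usd_index"] + 0.2 * deps["oil_demand"],
        ["usd_index", "opec_commitment", "oil_demand"],
    ),
}

def evaluate(name, cache=None):
    """Resolve a forecast by first resolving every forecast it depends on."""
    cache = {} if cache is None else cache
    if name not in cache:
        fn, inputs = forecasts[name]
        deps = {i: evaluate(i, cache) for i in inputs}
        cache[name] = fn(deps)
    return cache[name]

print(evaluate("oil_price"))
```

The point of the sketch is the shape, not the numbers: once forecasts declare their inputs explicitly, a system can re-resolve an entire graph automatically whenever one upstream forecast changes, which is exactly the part humans cannot do at scale.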

Automating and scaling forecasts

Imagine a foundation model with real-time access to the Internet and news, the ability to write statistical code to analyze data, and a database of its past forecasts. Such a model should be able to quickly run analysis for new forecasts, keep them up to date, and regularly notify subscribers and stakeholders when forecasts change.

The above is our first step in building a forecast system. It won't be perfect, but we believe it should do better than many humans.
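That monitor-reanalyze-notify cycle can be sketched in a few lines. Everything below is a hypothetical stand-in: `fetch_new_evidence` for real-time news retrieval, `reanalyze` for the model writing and running statistical code, and the print call for a subscriber notification service; the question and numbers are invented.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class Forecast:
    question: str
    value: float
    rationale: str
    updated_at: datetime.datetime
    history: list = field(default_factory=list)  # (timestamp, old value) pairs

def fetch_new_evidence(question):
    # Hypothetical stand-in for real-time news / web retrieval.
    return ["OPEC announces production cut"]

def reanalyze(question, evidence, prior):
    # Hypothetical stand-in for an LLM writing and running analysis code;
    # here it simply nudges the prior value upward.
    return prior.value * 1.05, f"Adjusted for: {evidence[0]}"

def update(forecast):
    """One pass of the monitor-reanalyze-notify cycle."""
    evidence = fetch_new_evidence(forecast.question)
    new_value, rationale = reanalyze(forecast.question, evidence, forecast)
    if new_value != forecast.value:
        forecast.history.append((forecast.updated_at, forecast.value))
        forecast.value = new_value
        forecast.rationale = rationale
        forecast.updated_at = datetime.datetime.now(datetime.timezone.utc)
        # Stand-in for notifying subscribers/stakeholders of the change.
        print(f"Notify subscribers: {forecast.question} -> {new_value:.2f}")
    return forecast

f = Forecast("Brent crude, 2025 avg (USD)", 80.0, "baseline",
             datetime.datetime.now(datetime.timezone.utc))
update(f)
```

Keeping the old value and rationale in `history` is what lets the system learn from its own past forecasts, the database mentioned above.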

Now, imagine further: a model designed and trained on past forecasts to better understand the logic and relationships among its observations, one that can even write code to structure its own interrelated forecasts. Imagine a model like this building a perspective on human civilization and making predictions.

This is still very far off, but we believe such a model is possible, and we're working to make it happen.

Welcome to Emerging Trajectories.