Emerging Trajectories v0.2.3 comes with agents and utility functions that make it incredibly easy to launch new forecasts. This tutorial explains how you can create a forecast and then update it with new information.
If you don't have the package installed, make sure to run pip install emergingtrajectories.
All forecasts are tied to specific statements. These statements present the forecasting challenge and ask an agent to make a numerical prediction, such as a price, percentage, or probability estimate.
You can choose a statement to forecast against by reviewing our forecast list or by creating your own.
Make a note of the ID of the statement you want to forecast against; you'll need it in the code below.
The code below loads your API keys from a .env file. In this case, we need API keys for OpenAI, Emerging Trajectories, and Google Search. If you have trouble finding or getting keys for any of these services, let us know and we'll help you out!
import os
from dotenv import load_dotenv
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
et_api_key = os.getenv("ET_API_KEY")
google_api_key = os.getenv("GOOGLE_API_KEY")
google_search_id = os.getenv("GOOGLE_SEARCH_ID")
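For reference, the .env file should sit in your working directory and define the four keys loaded above. A minimal example with placeholder values (substitute your own keys):

```shell
OPENAI_API_KEY=sk-your-openai-key
ET_API_KEY=your-emerging-trajectories-key
GOOGLE_API_KEY=your-google-api-key
GOOGLE_SEARCH_ID=your-programmable-search-engine-id
```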
Now that your API keys are loaded, we'll create a knowledge base. Knowledge bases track which content a prediction agent has already visited, so you don't reincorporate old information into later updates. Creating a knowledge base requires only a folder path where all the content will be saved.
from emergingtrajectories.knowledge import KnowledgeBaseFileCache
kb = KnowledgeBaseFileCache("f_cache")
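To build intuition for what the knowledge base does, here is a minimal, illustrative sketch of a file-backed cache in plain Python. This is not the library's implementation (KnowledgeBaseFileCache also stores the page content itself), but it shows the core idea: persist a record of visited URLs in a folder so that later runs can skip them.

```python
import json
import os

class SimpleFileCache:
    """Illustrative only: remembers which URLs have been seen across runs."""

    def __init__(self, folder):
        self.folder = folder
        os.makedirs(folder, exist_ok=True)
        self.index_path = os.path.join(folder, "index.json")
        # Reload previously seen URLs, if any, so state persists across runs.
        if os.path.exists(self.index_path):
            with open(self.index_path) as f:
                self.seen = set(json.load(f))
        else:
            self.seen = set()

    def has_seen(self, url):
        return url in self.seen

    def mark_seen(self, url):
        self.seen.add(url)
        with open(self.index_path, "w") as f:
            json.dump(sorted(self.seen), f)

cache = SimpleFileCache("demo_cache")
cache.mark_seen("https://example.com/article-1")
print(cache.has_seen("https://example.com/article-1"))  # True
print(cache.has_seen("https://example.com/article-2"))  # False
```

Because the index lives on disk, a second run of the same script (or a second agent pointed at the same folder) sees the same history, which is why the tutorial reuses the "f_cache" folder when updating the forecast later.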
Now that you have the knowledge base set up, it's time to build our first forecast! Suppose we're interested in Statement 8, where we estimate the probability that July 2024 will be the hottest July on record. The code below crawls Google search results, generates a forecast, and saves it.
from emergingtrajectories.agents import ScrapeAndPredictAgent
agent_results = ScrapeAndPredictAgent(
    openai_api_key,
    google_api_key,
    google_search_id,
    "Temperature records and observations for 2024, especially July 2024",  # Google search query
    kb,  # knowledge base
    8,   # statement ID
    et_api_key,
    prediction_agent="Web Scraper - July 2024 Temperature Anomalies"
)
print(agent_results)
The code above takes your API keys, along with the Google search query used to find content. In this case, the query is Temperature records and observations for 2024, especially July 2024. The results will be saved in the knowledge base, kb. We also give our agent a name so we can revisit its results later. In this case, it's Web Scraper - July 2024 Temperature Anomalies.
That's it! The forecast ID you see in the response will be live on the website as well, and should be listed on the statement's landing page too.
Suppose the forecast ID returned above was 70, and you now want to update the forecast. The code for this is below.
from emergingtrajectories.agents import ExtendScrapePredictAgent
# Load the knowledge base.
kb = KnowledgeBaseFileCache("f_cache")
agent_results = ExtendScrapePredictAgent(
    openai_api_key,
    google_api_key,
    google_search_id,
    "Temperature records and observations for 2024, especially July 2024",  # Google search query
    kb,  # knowledge base
    70,  # forecast ID to extend
    et_api_key,
    prediction_agent="Web Scraper - July 2024 Temperature Anomalies"
)
print(agent_results)
You'll notice that the script is very similar. The only difference is that we submit a forecast ID instead of a statement ID. Emerging Trajectories knows which forecasts are tied to which statements, so it can look up the broader statement and run the analysis.
If the web scraper doesn't find any new content, you'll see this stated in your terminal window, and a forecast will not be created.
If you need help with APIs, code, or anything else, please reach out at hello --at-- phaseai --dot-- com!