Getting Started with Forecasts API
The Forecasts API delivers fast, accurate, and scalable demand forecasts—powered by the real-world events that impact your business. Whether you’re starting from scratch or augmenting an existing model, our event-driven forecasting approach improves accuracy, unlocking significant ROI and cutting development time by months.
This API provides ready-to-use, event-optimized forecasts for your business, embedding the impact of sports, concerts, school holidays, and more directly into the forecast output. There’s no need to source or model event effects separately—we handle it for you.
Why Use It?
Event-aware by default — real-world events are built into every forecast
Industry-specific performance — designed for demand planners, revenue managers, and ops teams
Faster and more affordable than building your own system
PredictHQ’s Forecasts API is the only event-driven, fully automated forecasting solution available—built to get you to accurate forecasts without the complexity.
The Forecasts API can be used anywhere you can run code (SageMaker, Snowflake, Databricks, etc.). The demo here runs in AWS SageMaker.
All code snippets in this guide assume the appropriate config has already been set:
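The config block isn't reproduced here, but the later snippets assume a minimal setup along these lines; the access token value is a placeholder and the base URL is an assumption to check against the API reference:

```python
# Minimal config sketch: an authenticated HTTP session for the PredictHQ API.
# ACCESS_TOKEN is a placeholder; create a token in your PredictHQ account.
import requests

ACCESS_TOKEN = "YOUR_PREDICTHQ_ACCESS_TOKEN"
BASE_URL = "https://api.predicthq.com"  # assumption: verify against the API reference

session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/json",
})
```

The snippets later in this guide reuse this session and BASE_URL.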
To generate a forecast, you need to provide a daily time series with two columns:
date - The date of the observation, in YYYY-MM-DD format (ISO 8601).
demand - The actual demand value for that date (e.g. units sold, bookings).
Requirements:
The data must be at a daily level (one row per date)
Provide at least 18 months of history for best results
Demand data will be rejected if it contains duplicated dates, missing values in the demand column, or non-numeric demand values
Example:
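The original example isn't shown here; a purely illustrative daily series in the required shape might look like this (values are made up, and a real upload should cover at least 18 months):

```python
# Illustrative demand history: one row per day, columns "date" and "demand".
# Values are made up; a real history should cover at least 18 months.
import pandas as pd

demand_df = pd.DataFrame(
    {
        "date": ["2024-01-01", "2024-01-02", "2024-01-03", "2024-01-04"],
        "demand": [412, 388, 430, 455],
    }
)
print(demand_df)
```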
All forecast models are tied to a Saved Location so you can define the location once and create multiple models for it. For this example we're going to look at a theoretical restaurant located by the O2 Arena in London.
Our Suggested Radius API calculates the optimal area around your business to capture the events that are likely to impact your demand.
After creating the Saved Location, we can re-use it across as many forecast models as we need.
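A sketch of that flow is below. The endpoint paths, query parameters, and response fields shown are assumptions based on typical usage of the Suggested Radius and Saved Locations APIs, so verify them against the API reference; the coordinates only approximate the O2 Arena area.

```python
# Sketch: get a suggested radius for the venue, then create a Saved Location with it.
# Endpoint paths and field names are assumptions; verify against the API reference.
LAT, LON = 51.503, 0.003  # approximate coordinates near the O2 Arena, London

radius_resp = session.get(
    f"{BASE_URL}/v1/suggested-radius/",
    params={
        "location.origin": f"{LAT},{LON}",
        "radius_unit": "mi",
        "industry": "restaurants",  # assumption: industry hint used by the calculation
    },
)
radius_resp.raise_for_status()
radius = radius_resp.json()  # assumption: contains "radius" and "radius_unit"

location_resp = session.post(
    f"{BASE_URL}/v1/saved-locations",
    json={
        "name": "Restaurant near the O2 Arena",
        "geojson": {
            "type": "Feature",
            "properties": {
                "radius": radius["radius"],
                "radius_unit": radius["radius_unit"],
            },
            "geometry": {"type": "Point", "coordinates": [LON, LAT]},
        },
    },
)
location_resp.raise_for_status()
location_id = location_resp.json()["location_id"]  # assumption: response field name
```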
During the training process the demand will be analyzed by Beam to determine what types of events impact your demand. This includes correlation and feature importance testing. The important features (from Features API) will be used when training your model and when forecasting.
Training usually takes a few minutes.
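A minimal sketch of creating a model against the Saved Location, uploading the demand history, and starting training is below; the /v1/forecasts/... paths and payload fields are assumptions to verify against the API reference.

```python
# Sketch: create a forecast model, upload the demand history, and start training.
# Endpoint paths and payload fields are assumptions; verify against the API reference.
model_resp = session.post(
    f"{BASE_URL}/v1/forecasts/models",
    json={
        "name": "O2 restaurant - daily demand",
        "location": {"saved_location_id": location_id},  # assumption: how the location is referenced
    },
)
model_resp.raise_for_status()
model_id = model_resp.json()["model_id"]  # assumption: response field name

# Upload the demand history prepared earlier
session.post(
    f"{BASE_URL}/v1/forecasts/models/{model_id}/demand",
    json={"demand": demand_df.to_dict(orient="records")},
).raise_for_status()

# Start training; Beam analyses the demand and selects the relevant event features
session.post(f"{BASE_URL}/v1/forecasts/models/{model_id}/train").raise_for_status()
```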
Use evaluation metrics such as MAPE to compare the model performance to other models, benchmarks, etc. In this example, the benchmark model had a MAPE of 8.96%.
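Once training finishes, the model's accuracy metrics can be read back. The polling loop and the readiness/metrics field names below are assumptions about the response shape.

```python
# Sketch: wait for training to finish, then read the model's evaluation metrics.
# Field names ("readiness", "metrics", etc.) are assumptions; verify against the API reference.
import time

while True:
    model = session.get(f"{BASE_URL}/v1/forecasts/models/{model_id}").json()
    if model.get("readiness", {}).get("status") == "ready":
        break
    time.sleep(30)  # training usually takes a few minutes

metrics = model.get("metrics", {})
print("MAPE:", metrics.get("mape"))
print("MAE:", metrics.get("mae"))
print("RMSE:", metrics.get("rmse"))
```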
Retrieve the forecast, then visualize the actual demand we uploaded alongside the forecasted demand:
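A sketch of retrieving the forecast and plotting it next to the uploaded history is below; the /forecast endpoint path and the "results", "date", and "forecast" field names are assumptions.

```python
# Sketch: retrieve the forecast and plot it against the uploaded demand history.
# Endpoint path and response field names are assumptions; verify against the API reference.
import matplotlib.pyplot as plt
import pandas as pd

forecast_resp = session.get(f"{BASE_URL}/v1/forecasts/models/{model_id}/forecast")
forecast_resp.raise_for_status()
forecast_df = pd.DataFrame(forecast_resp.json()["results"])  # assumption: "results" list

plt.figure(figsize=(12, 4))
plt.plot(pd.to_datetime(demand_df["date"]), demand_df["demand"], label="Actual demand")
plt.plot(pd.to_datetime(forecast_df["date"]), forecast_df["forecast"], label="Forecast")
plt.title("Actual vs forecasted demand")
plt.legend()
plt.show()
```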
After you have trained a model you can keep using that model in your ongoing workflow.
Every date in the forecast response includes a forecast value—that’s the core output you’ll use. Optionally, you can request explainability to get additional context on why the model predicted that value for a given day. This includes a list of impactful real-world events (e.g. school holidays, concerts) that the model considered significant for that date. There are two key pieces of explainability that can be provided:
phq_explainability - Top events the model has determined are impacting your demand on this date.
phq_features - List of features (from Features API) that were identified through Beam's Feature Importance process as relevant to your demand, as well as their values. This field is only available to customers who have also purchased our Features product.
Here's an example truncated response for a single date showing phq_explainability:
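The original response isn't reproduced here; the sketch below only illustrates the general shape, and every field name and value in it is an assumption rather than the documented schema.

```python
# Illustrative shape of a single forecast date with phq_explainability.
# All field names and values here are assumptions, not the documented schema.
example_result = {
    "date": "2024-06-15",
    "forecast": 512.4,
    "phq_explainability": {
        "events": [
            {
                "id": "5xABC123example",       # hypothetical event id
                "title": "Concert at The O2",  # hypothetical event
                "category": "concerts",
            }
        ]
    },
}
```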
To get the most accurate results from the Forecasts API, your input data needs to reflect meaningful demand patterns over time. Here are key tips to improve forecast performance and reliability:
Include enough history: At least 18 months of daily demand helps the model learn seasonal and event-driven patterns.
Keep it consistent: Submit clean, continuous daily data—no smoothing, gaps, or placeholder values.
Avoid over-segmentation: Low-volume or highly granular series often perform worse. Aggregate where possible.
Watch out for tiny values: Very small but non-zero demand can distort percentage-based metrics like MAPE.
Exclude outliers if needed: Remove early COVID-19 disruptions or other non-repeating anomalies if they don’t reflect current demand.
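A quick pandas check along these lines (reusing the date/demand column names described earlier) can catch most of these issues before you upload:

```python
# Sketch: sanity-check a demand series against the common issues listed above.
import pandas as pd

df = demand_df.copy()
df["date"] = pd.to_datetime(df["date"])

duplicate_dates = int(df["date"].duplicated().sum())
missing_values = int(df["demand"].isna().sum())
non_numeric = int(pd.to_numeric(df["demand"], errors="coerce").isna().sum())

# Gaps in the daily series (dates with no row at all)
full_range = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
missing_days = full_range.difference(pd.DatetimeIndex(df["date"]))

# Tiny non-zero values that can distort percentage-based metrics such as MAPE
tiny_values = int(((df["demand"] > 0) & (df["demand"] < 1)).sum())

print(f"duplicate dates: {duplicate_dates}, missing values: {missing_values}, "
      f"non-numeric: {non_numeric}, missing days: {len(missing_days)}, tiny values: {tiny_values}")
```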
The troubleshooting guide covers topics like:
What to do if your forecast accuracy is poor (e.g. noisy or low-volume data)
Why not enough history can reduce model performance
How overly fine-grained series can lead to weak signals
When to remove early COVID-19 disruptions from your dataset
Before tweaking your inputs or retrying, we strongly recommend reviewing the troubleshooting guide—it can save a lot of time and guesswork.
Before you get started, make sure you have an API access token.
You can run the example yourself and adapt it to your needs.
Lower values indicate better accuracy. See the guide for help interpreting MAPE, MAE, and RMSE.
For more detailed recommendations, see the troubleshooting guide.
If your forecasts aren’t meeting expectations, don’t worry—there are several common reasons why accuracy might be lower than expected. We’ve put together a dedicated troubleshooting guide to help you identify and resolve these issues.
We also have a guide on interpreting forecast evaluation metrics (MAPE, MAE, RMSE) meaningfully.
Forecasts API reference - Full schema, endpoints and parameters
Forecast accuracy metrics guide - Guide to interpreting MAPE, MAE and RMSE
Troubleshooting guide - Common causes of low accuracy and how to fix them