Understanding Forecast Accuracy Metrics
The Forecasts API provides three common metrics to help you evaluate how well the model is performing. Each metric highlights a different aspect of error—use them together to get a complete picture.
MAPE – Mean Absolute Percentage Error
MAPE quantifies the average magnitude of forecast error as a percentage of actual values. It’s easy to interpret and especially useful for comparing forecast performance across products, locations, or scales. However, it can become distorted when actual values are close to zero.
Best used when actual values are consistently non-zero.
Good for comparing forecasts across series with different scales.
Can produce very large values when actual demand is low, even if the forecast is reasonable.
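For concreteness, here is a minimal Python sketch of the MAPE calculation. The function name and the sample numbers are illustrative only; this is not a Forecasts API call.

```python
from typing import Sequence

def mape(actuals: Sequence[float], forecasts: Sequence[float]) -> float:
    """Mean Absolute Percentage Error, returned as a percentage.

    Assumes every actual value is non-zero; dividing by a near-zero
    actual is exactly what inflates MAPE when demand is low.
    """
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

# A forecast that is off by 2 units at every point:
print(mape([100, 50, 10], [102, 48, 12]))  # ~8.7%; the low-demand point dominates
```

Note how the point with an actual value of 10 contributes a 20% error on its own, even though the miss is the same 2 units as everywhere else.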
In our expanding window evaluation framework, MAPE is calculated not from a single forecast but across multiple rolling forecast iterations. This simulates how the model would perform in a real-world setting where forecasts are generated repeatedly over time, and it ensures the reported accuracy reflects consistent performance rather than a one-off result.
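The sketch below illustrates this rolling, expanding-window pattern under simplified assumptions. Here `fit_and_forecast` is a hypothetical stand-in for whatever model is being evaluated; it is not part of the Forecasts API.

```python
def mape(actuals, forecasts):
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

def expanding_window_mape(series, initial_train_size, horizon, fit_and_forecast):
    """Average MAPE over successive forecast origins.

    Each iteration grows the training window by `horizon` points,
    mimicking a model that is refit as new data arrives.
    """
    scores = []
    train_end = initial_train_size
    while train_end + horizon <= len(series):
        train = series[:train_end]
        actual = series[train_end:train_end + horizon]
        forecast = fit_and_forecast(train, horizon)  # hypothetical model call
        scores.append(mape(actual, forecast))
        train_end += horizon
    return sum(scores) / len(scores)

# Toy example: a naive "repeat the last value" model.
series = [10, 12, 11, 13, 14, 13, 15, 16]
naive = lambda train, h: [train[-1]] * h
print(expanding_window_mape(series, initial_train_size=4, horizon=2,
                            fit_and_forecast=naive))  # averaged over two forecast origins
```

Averaging over several origins like this means one lucky (or unlucky) window cannot dominate the reported score.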
MAE – Mean Absolute Error
MAE measures the average absolute difference between predicted and actual values, expressed in the same units as your demand (e.g. units sold, bookings). It treats all errors equally, making it a simple and intuitive way to understand overall forecast accuracy.
Easy to interpret and compare against your typical daily volumes.
Treats all errors the same—no extra weight on large deviations.
Best used when actual values are on a consistent scale across time or series.
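A minimal sketch of the MAE calculation, using the same illustrative numbers as the MAPE example above (again, not a Forecasts API call):

```python
def mae(actuals, forecasts):
    """Mean Absolute Error, in the same units as the series."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# The same data as the MAPE example: off by 2 units at every point.
print(mae([100, 50, 10], [102, 48, 12]))  # 2.0 (units)
```

Unlike MAPE, the low-demand point does not inflate the score: a 2-unit miss counts as 2 units regardless of the actual value.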
RMSE – Root Mean Squared Error
RMSE measures the square root of the average squared differences between predicted and actual values, placing greater weight on larger errors.
Penalizes larger errors more than MAE does.
Helpful for identifying volatility or occasional large misses.
Useful when high-impact outliers matter more than average performance.
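To make the contrast with MAE concrete, the sketch below compares two hypothetical forecasts with identical MAE but different error profiles; all numbers are illustrative.

```python
import math

def rmse(actuals, forecasts):
    """Root Mean Squared Error; squaring amplifies large misses."""
    squared = [(a - f) ** 2 for a, f in zip(actuals, forecasts)]
    return math.sqrt(sum(squared) / len(squared))

actual = [100, 50, 10]
steady = [102, 48, 12]  # off by 2 units everywhere (MAE = 2.0)
spiky = [100, 50, 16]   # perfect twice, off by 6 once (MAE = 2.0)

print(rmse(actual, steady))  # 2.0
print(rmse(actual, spiky))   # ~3.46; RMSE flags the single large miss
```

Both forecasts have an MAE of 2.0, but RMSE separates them, which is why it is the more informative metric when occasional large misses are costly.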