Time series prediction performance measures provide a summary of the skill and capability of the forecast model that made the predictions.
There are many different performance measures to choose from. It can be confusing to know which measure to use and how to interpret the results.
In this tutorial, you will discover performance measures for evaluating time series forecasts with Python.
Time series forecasting generally focuses on the prediction of real values, known as regression problems. Therefore, the performance measures in this tutorial focus on methods for evaluating real-valued predictions.
After completing this tutorial, you will know:
- Basic measures of forecast performance, including residual forecast error and forecast bias.
- Time series forecast error calculations that have the same units as the expected outcomes such as mean absolute error.
- Widely used error calculations that punish large errors, such as mean squared error and root mean squared error.
Kick-start your project with my new book Time Series Forecasting With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Jun/2019: Fixed typo in forecast bias (thanks Francisco).
Forecast Error (or Residual Forecast Error)
The forecast error is calculated as the expected value minus the predicted value.
This is called the residual error of the prediction.
The forecast error can be calculated for each prediction, providing a time series of forecast errors.
The example below demonstrates how the forecast error can be calculated for a series of 5 predictions compared to 5 expected values. The example was contrived for demonstration purposes.
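A minimal sketch of such an example might look like the following; the expected and predicted values are illustrative placeholders rather than real data.

```python
# contrived expected values and model predictions
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# forecast error is the expected value minus the predicted value
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
print('Forecast Errors: %s' % forecast_errors)
```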
Running the example calculates the forecast error for each of the 5 predictions. The list of forecast errors is then printed.
The units of the forecast error are the same as the units of the prediction. A forecast error of zero indicates no error, or perfect skill for that forecast.
Mean Forecast Error (or Forecast Bias)
Mean forecast error is calculated as the average of the forecast error values.
Forecast errors can be positive and negative. This means that when the average of these values is calculated, an ideal mean forecast error would be zero.
A mean forecast error value other than zero suggests a tendency of the model to over forecast (negative error) or under forecast (positive error). As such, the mean forecast error is also called the forecast bias.
The forecast bias can be calculated directly as the mean of the forecast error values. The example below demonstrates how the mean of the forecast errors can be calculated manually.
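A minimal sketch, reusing the same illustrative values as above, might look like this:

```python
# contrived expected values and model predictions
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# forecast errors: expected minus predicted
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
# the mean of the forecast errors is the forecast bias
bias = sum(forecast_errors) / float(len(forecast_errors))
print('Bias: %f' % bias)
```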
Running the example prints the mean forecast error, also known as the forecast bias.
In this case the result is negative, meaning that we have over forecast.
The units of the forecast bias are the same as the units of the predictions. A forecast bias of zero, or a very small number near zero, shows an unbiased model.
Mean Absolute Error
The mean absolute error, or MAE, is calculated as the average of the forecast error values, where all of the forecast error values are forced to be positive.
Forcing values to be positive is called making them absolute. This is signified by the absolute function abs() or shown mathematically as two pipe characters around the value: |value|.
The mean absolute error can be written as mean_absolute_error = mean(abs(forecast_error)), where abs() makes values positive, forecast_error is one or a sequence of forecast errors, and mean() calculates the average value.
We can use the mean_absolute_error() function from the scikit-learn library to calculate the mean absolute error for a list of predictions. The example below demonstrates this function.
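A minimal sketch using mean_absolute_error() from sklearn.metrics, with the same illustrative values, might look like this:

```python
# calculate mean absolute error with scikit-learn
from sklearn.metrics import mean_absolute_error

# contrived expected values and model predictions
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mae = mean_absolute_error(expected, predictions)
print('MAE: %f' % mae)
```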
Running the example calculates and prints the mean absolute error for a list of 5 expected and predicted values.
These error values are in the original units of the predicted values. A mean absolute error of zero indicates no error.
Mean Squared Error
The mean squared error, or MSE, is calculated as the average of the squared forecast error values. Squaring the forecast error values forces them to be positive; it also has the effect of putting more weight on large errors.
Large or outlier forecast errors are squared, which pulls the mean of the squared forecast errors upward and results in a larger mean squared error score. In effect, the score assigns worse performance to models that make large, erroneous forecasts.
We can use the mean_squared_error() function from scikit-learn to calculate the mean squared error for a list of predictions. The example below demonstrates this function.
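A minimal sketch using mean_squared_error() from sklearn.metrics, again with illustrative values:

```python
# calculate mean squared error with scikit-learn
from sklearn.metrics import mean_squared_error

# contrived expected values and model predictions
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
print('MSE: %f' % mse)
```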
Running the example calculates and prints the mean squared error for a list of expected and predicted values.
The error values are in squared units of the predicted values. A mean squared error of zero indicates perfect skill, or no error.
Root Mean Squared Error
The mean squared error described above is in the squared units of the predictions.
It can be transformed back into the original units of the predictions by taking the square root of the mean squared error score. This is called the root mean squared error, or RMSE.
This can be calculated by using the sqrt() math function on the mean squared error calculated using the mean_squared_error() scikit-learn function.
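A minimal sketch combining sqrt() from the math module with the mean_squared_error() scikit-learn function, using the same illustrative values:

```python
# calculate root mean squared error as the square root of the MSE
from math import sqrt
from sklearn.metrics import mean_squared_error

# contrived expected values and model predictions
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
rmse = sqrt(mse)
print('RMSE: %f' % rmse)
```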
Running the example calculates the root mean squared error.
The RMSE values are in the same units as the predictions. As with the mean squared error, an RMSE of zero indicates no error.
Further Reading
Below are some references for further reading on time series forecast error measures.
- Section 3.3 Measuring Predictive Accuracy, Practical Time Series Forecasting with R: A Hands-On Guide.
- Section 2.5 Evaluating Forecast Accuracy, Forecasting: Principles and Practice.
- scikit-learn Metrics API
- Section 3.3.4. Regression metrics, scikit-learn API Guide
Summary
In this tutorial, you discovered a suite of 5 standard time series performance measures in Python.
Specifically, you learned:
- How to calculate forecast residual error and how to estimate the bias in a list of forecasts.
- How to calculate mean absolute forecast error to describe error in the same units as the predictions.
- How to calculate the widely used mean squared error and root mean squared error for forecasts.
Do you have any questions about time series forecast performance measures, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.