
Friday, 30 August 2024

How to Seed State for LSTMs for Time Series Forecasting in Python

 Long Short-Term Memory networks, or LSTMs, are a powerful type of recurrent neural network capable of learning long sequences of observations.

A promise of LSTMs is that they may be effective at time series forecasting, although the method is known to be difficult to configure and use for these purposes.

A key feature of LSTMs is that they maintain an internal state that can aid in the forecasting. This raises the question of how best to seed the state of a fit LSTM model prior to making a forecast.

In this tutorial, you will discover how to design, execute, and interpret the results from an experiment to explore whether it is better to seed the state of a fit LSTM from the training dataset or to use no prior state.

After completing this tutorial, you will know:

  • About the open question of how to best initialize the state of a fit LSTM for forecasting.
  • How to develop a robust test harness for evaluating LSTM models on univariate time series forecasting problems.
  • How to determine whether or not seeding the state of your LSTM prior to forecasting is a good idea on your time series forecasting problem.

    Tutorial Overview

    This tutorial is broken down into 5 parts; they are:

    1. Seeding LSTM State
    2. Shampoo Sales Dataset
    3. LSTM Model and Test Harness
    4. Code Listing
    5. Experimental Results

    Environment

    This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

    You must have Keras (version 2.0 or higher) installed with either the TensorFlow or Theano backend.

    The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.
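As a quick optional check, the snippet below prints the installed versions; it is a minimal sketch, and the exact versions you see will differ.

    # Print the versions of the assumed environment.
    import keras
    import sklearn
    import pandas
    import numpy
    import matplotlib

    print('keras: %s' % keras.__version__)  # expecting 2.0 or higher
    print('sklearn: %s' % sklearn.__version__)
    print('pandas: %s' % pandas.__version__)
    print('numpy: %s' % numpy.__version__)
    print('matplotlib: %s' % matplotlib.__version__)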

If you need help setting up your Python environment, see this post on setting up a Python environment for machine learning.

    Seeding LSTM State

When using stateful LSTMs in Keras, you have fine-grained control over when the internal state of the model is cleared.

    This is achieved using the model.reset_states() function.

When training a stateful LSTM, it is important to clear the state of the model between training epochs, so that the state built up over an epoch corresponds to the sequence of observations within that epoch.
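As a minimal sketch of this pattern, assuming model is a compiled stateful LSTM and X, y, n_epochs, and batch_size are placeholder names for your prepared data and settings:

    # Fit one epoch at a time so state can be cleared between epochs.
    for epoch in range(n_epochs):
        # shuffle=False preserves the order of observations within the epoch
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()  # discard the state built up during this epoch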

    Given that we have this fine-grained control, there is a question as to whether or not and how to initialize the state of the LSTM prior to making a forecast.

    The options are:

• Reset the state prior to forecasting.
• Initialize the state with the training dataset prior to forecasting.

    It is assumed that initializing the state of the model using the training data would be superior, but this needs to be confirmed with experimentation.

    Additionally, there may be multiple ways to seed this state; for example:

• Complete a training epoch, including weight updates; that is, do not reset the state at the end of the last training epoch.
    • Complete a forecast of the training data.

Generally, these two approaches are believed to be roughly equivalent. The latter, forecasting the training dataset, is preferred because it does not require any modification to the network weights and is a repeatable procedure for an immutable network saved to file.

    In this tutorial, we will consider the difference between:

    • Forecasting a test dataset using a fit LSTM with no state (e.g. after a reset).
    • Forecasting a test dataset with a fit LSTM with state after having forecast the training dataset.
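A sketch of the two scenarios, assuming model is a fit stateful LSTM and train_reshaped is the 3D training input (placeholder names):

    # Scenario 1: no prior state. Clear the state, then forecast the test set.
    model.reset_states()

    # Scenario 2: seeded state. Clear the state, then predict over the training
    # data (discarding the outputs) so that the internal state reflects the
    # series up to the start of the test set, then forecast the test set.
    model.reset_states()
    model.predict(train_reshaped, batch_size=batch_size)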

    Next, let’s take a look at a standard time series dataset we will use in this experiment.

    Shampoo Sales Dataset

    This dataset describes the monthly number of sales of shampoo over a 3-year period.

    The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads the dataset and creates a line plot of it.
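It is a minimal loading sketch that assumes the data has been saved as shampoo-sales.csv in the current working directory, with a header row and the month in the first column.

    from pandas import read_csv
    from matplotlib import pyplot

    # Load the monthly shampoo sales data as a Pandas Series.
    series = read_csv('shampoo-sales.csv', header=0, index_col=0).squeeze('columns')

    print(series.head())  # print the first 5 rows
    series.plot()         # line plot of the series
    pyplot.show()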

    Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

    A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

    Next, we will take a look at the LSTM configuration and test harness used in the experiment.


    LSTM Model and Test Harness

    Data Split

    We will split the Shampoo Sales dataset into two parts: a training and a test set.

    The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

    Models will be developed using the training dataset and will make predictions on the test dataset.
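As a small sketch of the split, assuming series is the Pandas Series loaded earlier:

    # First 24 monthly observations for training, final 12 for testing.
    raw_values = series.values
    train, test = raw_values[0:-12], raw_values[-12:]
    print(len(train), len(test))  # 24 12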

    Model Evaluation

    A rolling-forecast scenario will be used, also called walk-forward model validation.

The test dataset will be stepped through one time step at a time: a model will be used to make a forecast for the current time step, then the actual value from the test set will be taken and made available to the model for the forecast on the next time step.

    This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets. All of the forecasts on the test set will be made in a one-shot manner, in a single call to the model.

    All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.
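A sketch of the scoring step, assuming model is a fit stateful LSTM and test_X and test_y are the prepared 3D test inputs and targets (placeholder names; inverting the scaling and differencing before scoring is omitted for brevity):

    from math import sqrt
    from sklearn.metrics import mean_squared_error

    # One-shot forecast over the whole test set, then summarize with RMSE.
    predictions = model.predict(test_X, batch_size=4)
    rmse = sqrt(mean_squared_error(test_y, predictions))
    print('Test RMSE: %.3f' % rmse)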

    Data Preparation

    Before we can fit an LSTM model to the dataset, we must transform the data.

    The following three data transforms are performed on the dataset prior to fitting a model and making a forecast.

1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing to remove the increasing trend in the data.
2. Transform the time series into a supervised learning problem. Specifically, the organization of data into input and output patterns where the observation at the previous time step is used as an input to forecast the observation at the current time step.
3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to match the default hyperbolic tangent activation function of the LSTM model.
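Minimal sketches of the three transforms are given below; the helper names are illustrative.

    from pandas import DataFrame, Series, concat
    from sklearn.preprocessing import MinMaxScaler

    # 1. Difference the series (lag=1) to remove the increasing trend.
    def difference(dataset, interval=1):
        return Series([dataset[i] - dataset[i - interval]
                       for i in range(interval, len(dataset))])

    # 2. Frame the series as supervised learning: the value at t-1 is the
    #    input used to forecast the value at t.
    def timeseries_to_supervised(data, lag=1):
        df = DataFrame(data)
        columns = [df.shift(i) for i in range(1, lag + 1)]
        columns.append(df)
        return concat(columns, axis=1).fillna(0)

    # 3. Rescale observations to [-1, 1] for the LSTM's tanh activation.
    scaler = MinMaxScaler(feature_range=(-1, 1))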

    LSTM Model

    An LSTM model configuration will be used that is skillful but untuned.

    This means that the model will be fit to the data and will be able to make meaningful forecasts, but will not be the optimal model for the dataset.

    The network topology consists of 1 input, a hidden layer with 4 units, and an output layer with 1 output value.

The model will be fit for 3,000 epochs with a batch size of 4. The training dataset will be reduced to 20 observations after data preparation, so that the batch size divides evenly into both the training dataset (20) and the test dataset (12), a requirement when predicting with a stateful LSTM in Keras.
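A sketch of a fit function matching this description, assuming train_X has shape (20, 1, 1) and train_y has shape (20, 1) after data preparation (names are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense, LSTM

    def fit_lstm(train_X, train_y, batch_size=4, nb_epoch=3000, neurons=4):
        # 1 input feature per time step, 4 LSTM units, 1 output value
        model = Sequential()
        model.add(LSTM(neurons,
                       batch_input_shape=(batch_size, train_X.shape[1], train_X.shape[2]),
                       stateful=True))
        model.add(Dense(1))
        model.compile(loss='mean_squared_error', optimizer='adam')
        # fit one epoch at a time so the state can be cleared between epochs
        for _ in range(nb_epoch):
            model.fit(train_X, train_y, epochs=1, batch_size=batch_size,
                      verbose=0, shuffle=False)
            model.reset_states()
        return model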

    Experimental Run

    Each scenario will be run 30 times.

This means that 30 models will be created and evaluated for each scenario. The RMSE from each run will be collected, providing a population of results that can be summarized using descriptive statistics such as the mean and standard deviation.

    This is required because neural networks like the LSTM are influenced by their initial conditions (e.g. their initial random weights).

    The mean results for each scenario will allow us to interpret the average behavior of each scenario and how they compare.
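Assuming the per-run RMSE scores have been collected into two lists with the placeholder names no_seed_scores and seed_scores, the summary might look like this:

    from pandas import DataFrame
    from matplotlib import pyplot

    # Summarize the 30 scores per scenario with descriptive statistics.
    results = DataFrame({'no-seed': no_seed_scores, 'seed': seed_scores})
    print(results.describe())  # count, mean, std, min, quartiles, max

    # Compare the two distributions with a box and whisker plot.
    results.boxplot()
    pyplot.savefig('boxplot.png')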

    Let’s dive into the results.

    Code Listing

    Key modular behaviors were separated into functions for readability and testability, in case you would like to reuse this experimental setup.

    The specifics of the scenarios are described in the experiment() function.

The overall structure of the experiment code is sketched below.
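It is a rough skeleton, not the original listing; run_scenario is a hypothetical helper that prepares the data, fits one LSTM for the given scenario, and returns its test RMSE.

    def experiment(series, seed_state, n_repeats=30):
        error_scores = []
        for r in range(n_repeats):
            rmse = run_scenario(series, seed_state)  # hypothetical helper
            print('%d) Test RMSE: %.3f' % (r + 1, rmse))
            error_scores.append(rmse)
        return error_scores

    # Run both scenarios and collect the results for comparison.
    # seed_scores = experiment(series, seed_state=True)
    # no_seed_scores = experiment(series, seed_state=False)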

    Experimental Results

Running the experiment takes some time on either CPU or GPU hardware.

    The RMSE of each run is printed to give an idea of progress.

    At the end of the run, the summary statistics are calculated and printed for each scenario, including the mean and standard deviation.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    The complete output is listed below.

    A box and whisker plot is also created and saved to file, shown below.

Box and Whisker Plot of LSTM with and Without Seed of State

    The results are surprising.

    They suggest better results by not seeding the state of the LSTM prior to forecasting the test dataset.

This can be seen in the lower average error of 146.600505 monthly shampoo sales without seeding, compared to 186.432143 monthly shampoo sales with seeding. The difference is much clearer in the box and whisker plot of the distributions.

Perhaps the chosen model configuration resulted in a model too small to depend on the sequence and internal state enough to benefit from seeding prior to forecasting. Perhaps larger experiments are required.

    Extensions

    The surprising results open the door to further experimentation.

    • Evaluate the effect of clearing vs not clearing the state after the end of the last training epoch.
    • Evaluate the effect of predicting the training and test sets all at once vs one time step at a time.
    • Evaluate the effect of resetting and not resetting the LSTM state at the end of each epoch.

    Did you try one of these extensions? Share your findings in the comments below.

    Summary

    In this tutorial, you discovered how to experimentally determine the best way to seed the state of an LSTM model on a univariate time series forecasting problem.

    Specifically, you learned:

    • About the problem of seeding the state of an LSTM prior to forecasting and ways to address it.
    • How to develop a robust test harness for evaluating LSTM models for time series forecasting.
    • How to determine whether or not to seed the state of an LSTM model with the training data prior to forecasting.

Did you run the experiment, or a modified version of it?
Share your results in the comments; I'd love to see them.

    Do you have any questions about this post?
Ask your questions in the comments below and I will do my best to answer.
