Friday 10 May 2024

Understanding Stateful LSTM Recurrent Neural Networks in Python with Keras

A powerful and popular recurrent neural network is the long short-term memory network, or LSTM.

It is widely used because the architecture overcomes the vanishing and exploding gradient problems that plague all recurrent neural networks, allowing very large and very deep networks to be created.

Like other recurrent neural networks, LSTM networks maintain state, and the specifics of how this is implemented in the Keras framework can be confusing.

In this post, you will discover exactly how state is maintained in LSTM networks by the Keras deep learning library.

After reading this post, you will know:

  • How to develop a naive LSTM network for a sequence prediction problem
  • How to carefully manage state through batches and features with an LSTM network
  • How to manually manage state in an LSTM network for stateful prediction

    Kick-start your project with my new book Deep Learning With Python, including step-by-step tutorials and the Python source code files for all examples.

    Let’s get started.

    • Jul/2016: First published
    • Update Mar/2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0
    • Update Aug/2018: Updated examples for Python 3, updated stateful example to get 100% accuracy
    • Update Mar/2019: Fixed typo in the stateful example
    • Update Jul/2022: Updated for TensorFlow 2.x API

    Understanding stateful LSTM recurrent neural networks in Python with Keras
    Photo by Martin Abegglen, some rights reserved.

    Problem Description: Learn the Alphabet

    In this tutorial, you will develop and contrast a number of different LSTM recurrent neural network models.

    The context of these comparisons will be a simple sequence prediction problem of learning the alphabet. That is, given a letter of the alphabet, it will predict the next letter of the alphabet.

    This is a simple sequence prediction problem that, once understood, can be generalized to other sequence prediction problems like time series prediction and sequence classification.

    Let’s prepare the problem with some Python code you can reuse from example to example.

    First, let’s import all of the classes and functions you will use in this tutorial.
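
    A minimal set of imports for the examples that follow is sketched below, assuming a TensorFlow 2.x installation with its bundled Keras API:

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Dense, LSTM
        from tensorflow.keras.utils import to_categorical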

    Next, you can seed the random number generator to ensure that the results are the same each time the code is executed.
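
    For example, seeding both NumPy and TensorFlow (the seed value 7 is arbitrary):

        # fix the random seeds for reproducibility
        np.random.seed(7)
        tf.random.set_seed(7)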

    You can now define your dataset, the alphabet. You define the alphabet in uppercase characters for readability.

    Neural networks work with numbers, so you need to map the letters of the alphabet to integer values. You can do this easily by creating a dictionary (map) from each character to its integer index. You can also create a reverse lookup (integer to character) for converting predictions back into characters later.
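
    A sketch of this step is shown below; the names char_to_int and int_to_char are illustrative choices, not anything required by Keras:

        # define the raw dataset
        alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        # create a mapping of characters to integers, and the reverse lookup
        char_to_int = dict((c, i) for i, c in enumerate(alphabet))
        int_to_char = dict((i, c) for i, c in enumerate(alphabet))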

    Now, you need to create your input and output pairs on which to train your neural network. You can do this by defining an input sequence length and then reading sequences from the input alphabet sequence.

    For example, use an input length of 1. Starting at the beginning of the raw input data, you can read off the first letter “A” and the next letter as the prediction “B.” You move along one character and repeat until you reach a prediction of “Z.”

    Also, print out the input pairs for sanity checking.
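
    One way to build the input-output pairs is sketched below, assuming the alphabet string and char_to_int mapping defined above:

        # prepare the dataset of input to output pairs encoded as integers
        seq_length = 1
        dataX = []
        dataY = []
        for i in range(0, len(alphabet) - seq_length):
            seq_in = alphabet[i:i + seq_length]
            seq_out = alphabet[i + seq_length]
            dataX.append([char_to_int[char] for char in seq_in])
            dataY.append(char_to_int[seq_out])
            print(seq_in, '->', seq_out)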

    Running the code to this point will produce the following output, summarizing input sequences of length 1 and a single output character.
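
    With a sequence length of 1 and the print statement above, the pairs look like this (truncated):

        A -> B
        B -> C
        C -> D
        ...
        Y -> Z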

    You need to reshape the NumPy array into a format expected by the LSTM networks, specifically [samples, time steps, features].

    Once reshaped, you can then normalize the input integers to the range 0-to-1, the range of the sigmoid activation functions used by the LSTM network.

    Finally, you can think of this problem as a sequence classification task, where each of the 26 letters represents a different class. As such, you can convert the output (y) to a one-hot encoding using the Keras built-in function to_categorical().
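
    These three preparation steps might look like the following, continuing from the dataX and dataY lists built earlier:

        # reshape X to be [samples, time steps, features]
        X = np.reshape(dataX, (len(dataX), seq_length, 1))
        # normalize the integer values to the range 0-1
        X = X / float(len(alphabet))
        # one-hot encode the output variable
        y = to_categorical(dataY)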

    You are now ready to fit different LSTM models.

    Need help with Deep Learning in Python?

    Take my free 2-week email course and discover MLPs, CNNs and LSTMs (with code).

    Click to sign-up now and also get a free PDF Ebook version of the course.

    Naive LSTM for Learning One-Char to One-Char Mapping

    Let’s start by designing a simple LSTM to learn how to predict the next character in the alphabet, given the context of just one character.

    You will frame the problem as a random collection of one-letter input to one-letter output pairs. As you will see, this is a problematic framing of the problem for the LSTM to learn.

    Let’s define an LSTM network with 32 units and an output layer with a softmax activation function for making predictions. Because this is a multi-class classification problem, you can use the log loss function (called “categorical_crossentropy” in Keras) and optimize the network using the ADAM optimization function.

    The model is fit over 500 epochs with a batch size of 1.
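
    A sketch of this model is shown below, following the description above (32 LSTM units, a softmax output layer, categorical cross-entropy loss, the Adam optimizer, 500 epochs, batch size 1):

        # create and fit the naive LSTM model
        model = Sequential()
        model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
        model.add(Dense(y.shape[1], activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        model.fit(X, y, epochs=500, batch_size=1, verbose=2)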

    After you fit the model, you can evaluate and summarize the performance of the entire training dataset.
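
    For example:

        # summarize performance of the model on the training data
        scores = model.evaluate(X, y, verbose=0)
        print("Model Accuracy: %.2f%%" % (scores[1] * 100))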

    You can then re-run the training data through the network and generate predictions, converting both the input and output pairs back into their original character format to get a visual idea of how well the network learned the problem.
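
    A sketch of this prediction loop, reusing the int_to_char lookup defined earlier, might look like this:

        # demonstrate some model predictions
        for pattern in dataX:
            x = np.reshape(pattern, (1, len(pattern), 1))
            x = x / float(len(alphabet))
            prediction = model.predict(x, verbose=0)
            index = np.argmax(prediction)
            result = int_to_char[index]
            seq_in = [int_to_char[value] for value in pattern]
            print(seq_in, "->", result)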

    The entire code listing is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example produces the following output.

    You can see that this problem is indeed difficult for the network to learn.

    The reason is that the poor LSTM units do not have any context to work with. Each input-output pattern is shown to the network in a random order, and the state of the network is reset after each pattern (each batch where each batch contains one pattern).

    This is an abuse of the LSTM network architecture, treating it like a standard multilayer perceptron.

    Next, let’s try a different framing of the problem to provide more sequence to the network from which to learn.

    Naive LSTM for a Three-Char Feature Window to One-Char Mapping

    A popular approach to adding more context to data for multilayer perceptrons is to use the window method.

    This is where previous steps in the sequence are provided as additional input features to the network. You can try the same trick to provide more context to the LSTM network.

    Here, you will increase the sequence length from 1 to 3, for example:
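
        seq_length = 3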

    This creates training patterns like this:
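
        ABC -> D
        BCD -> E
        CDE -> F
        ...
        WXY -> Z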

    Each element in the sequence is then provided as a new input feature to the network. This requires a modification of how the input sequences are reshaped in the data preparation step:
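
    For the window framing, a sketch of the modified reshape looks like this (one time step of seq_length features):

        # reshape X to be [samples, time steps, features]
        X = np.reshape(dataX, (len(dataX), 1, seq_length))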

    It also requires modifying how the sample patterns are reshaped when demonstrating predictions from the model.
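
    For example, in the prediction loop shown earlier:

        x = np.reshape(pattern, (1, 1, len(pattern)))
        x = x / float(len(alphabet))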

    The entire code listing is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example provides the following output.

    You can see a slight lift in performance that may or may not be real. This is a simple problem that the LSTM was still not able to learn, even with the window method.

    Again, this is a misuse of the LSTM network by a poor framing of the problem. Indeed, the sequences of letters are time steps of one feature rather than one time step of separate features. You have given more context to the network but not more sequence as expected.

    In the next section, you will give more context to the network in the form of time steps.

    Naive LSTM for a Three-Char Time Step Window to One-Char Mapping

    In Keras, the intended use of LSTMs is to provide context in the form of time steps, rather than windowed features like with other network types.

    You can take your first example and simply change the sequence length from 1 to 3.

    Again, this creates input-output pairs that look like this:

    The difference is that the reshaping of the input data takes the sequence as a time step sequence of one feature rather than a single time step of multiple features.
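
    Concretely, the reshape becomes three time steps of one feature, in contrast to the one time step of three features used above:

        # reshape X to be [samples, time steps, features]
        X = np.reshape(dataX, (len(dataX), seq_length, 1))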

    This is the correct intended use of providing sequence context to your LSTM in Keras. The full code example is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this example provides the following output.

    You can see that the model learns the problem perfectly as evidenced by the model evaluation and the example predictions.

    But it has learned a simpler problem. Specifically, it has learned to predict the next letter from a sequence of three letters in the alphabet. It can be shown any random sequence of three letters from the alphabet and predict the next letter.

    It cannot actually enumerate the alphabet. It is possible that a large enough multilayer perceptron network might be able to learn the same mapping using the window method.

    The LSTM networks are stateful. They should be able to learn the whole alphabet sequence, but by default, the Keras implementation resets the network state after each training batch.

    LSTM State within a Batch

    The Keras implementation of LSTMs resets the state of the network after each batch.

    This suggests that if you had a batch size large enough to hold all input patterns and if all the input patterns were ordered sequentially, the LSTM could use the context of the sequence within the batch to better learn the sequence.

    You can demonstrate this easily by modifying the first example for learning a one-to-one mapping and increasing the batch size from 1 to the size of the training dataset.

    Additionally, Keras shuffles the training dataset before each training epoch. To ensure the training data patterns remain sequential, you can disable this shuffling.
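
    A sketch of the modified training call is shown below; the epoch count of 5000 is an illustrative value, since with a single large batch there is only one weight update per epoch and more epochs are typically needed:

        # train with all patterns in a single batch and without shuffling
        model.fit(X, y, epochs=5000, batch_size=len(dataX), verbose=2, shuffle=False)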

    The network will learn the mapping of characters using the within-batch sequence, but this context will not be available to the network when making predictions. You can evaluate both the ability of the network to make predictions randomly and in sequence.

    The full code example is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running the example provides the following output.

    As expected, the network is able to use the within-sequence context to learn the alphabet, achieving 100% accuracy on the training data.

    Importantly, the network can make accurate predictions for the next letter in the alphabet for randomly selected characters. Very impressive.

    Stateful LSTM for a One-Char to One-Char Mapping

    You have seen that you can break up the raw data into fixed-size sequences and that this representation can be learned by the LSTM but only to learn random mappings of 3 characters to 1 character.

    You have also seen that you can pervert the batch size to offer more sequence to the network, but only during training.

    Ideally, you want to expose the network to the entire sequence and let it learn the inter-dependencies rather than you defining those dependencies explicitly in the framing of the problem.

    You can do this in Keras by making the LSTM layers stateful and manually resetting the state of the network at the end of the epoch, which is also the end of the training sequence.

    This is truly how the LSTM networks are intended to be used.

    You first need to define your LSTM layer as stateful. In so doing, you must explicitly specify the batch size as a dimension on the input shape. This also means that when you evaluate the network or make predictions, you must specify and adhere to this same batch size. This is not a problem now, as you are using a batch size of 1. It could introduce difficulties when the batch size is not one, as predictions would then need to be made in batches and in sequence.
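
    A sketch of the stateful model definition is shown below; the number of LSTM units (16) is an illustrative choice:

        # define a stateful LSTM with a fixed batch size of 1
        batch_size = 1
        model = Sequential()
        model.add(LSTM(16, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
        model.add(Dense(y.shape[1], activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])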

    An important difference in training the stateful LSTM is that you manually train it one epoch at a time and reset the state after each epoch. You can do this in a for loop. Again, do not shuffle the input, preserving the sequence in which the input training data was created.
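
    For example, with 300 passes over the data (an illustrative number of epochs):

        # manually run one epoch at a time, resetting state between epochs
        for i in range(300):
            model.fit(X, y, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
            model.reset_states()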

    As mentioned, you specify the batch size when evaluating the performance of the network on the entire training dataset.
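
    For example:

        # evaluate on the training data using the same batch size
        scores = model.evaluate(X, y, batch_size=batch_size, verbose=0)
        model.reset_states()
        print("Model Accuracy: %.2f%%" % (scores[1] * 100))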

    Finally, you can demonstrate that the network has indeed learned the entire alphabet. You can seed it with the first letter “A,” request a prediction, feed the prediction back in as an input, and repeat the process all the way to “Z.”
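
    A sketch of this seeded prediction loop, reusing the char_to_int and int_to_char lookups, might look like the following:

        # seed the state with 'A' and predict the rest of the alphabet
        seed = [char_to_int['A']]
        for i in range(0, len(alphabet) - 1):
            x = np.reshape(seed, (1, len(seed), 1))
            x = x / float(len(alphabet))
            prediction = model.predict(x, verbose=0)
            index = np.argmax(prediction)
            print(int_to_char[seed[0]], '->', int_to_char[index])
            seed = [index]
        model.reset_states()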

    You can also see if the network can make predictions starting from an arbitrary letter.

    The entire code listing is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running the example provides the following output.

    You can see that the network has memorized the entire alphabet perfectly. It used the context of the samples themselves and learned whatever dependency it needed to predict the next character in the sequence.

    You can also see that if you seed the network with the first letter, it can correctly rattle off the rest of the alphabet.

    You can also see that it has only learned the full alphabet sequence and from a cold start. When asked to predict the next letter from “K,” it predicts “B” and falls back into regurgitating the entire alphabet.

    To truly predict “K,” the state of the network would need to be warmed up and iteratively fed the letters from “A” to “J.” This reveals that you could achieve the same effect with a “stateless” LSTM by preparing training data like this:

    Here, the input sequence is fixed at 25 (a-to-y to predict z), and patterns are prefixed with zero padding.

    Finally, this raises the question of training an LSTM network using variable length input sequences to predict the next character.

    LSTM with Variable-Length Input to One-Char Output

    In the previous section, you discovered that the Keras “stateful” LSTM was really only a shortcut to replaying the first n sequences and didn’t really help you learn a generic model of the alphabet.

    In this section, you will explore a variation of the “stateless” LSTM that learns random subsequences of the alphabet, in an effort to build a model that can be given arbitrary letters or subsequences of letters and predict the next letter in the alphabet.

    First, you are changing the framing of the problem. To simplify, you will define a maximum input sequence length and set it to a small value like 5 to speed up training. This defines the maximum length of subsequences of the alphabet that will be drawn for training. In extensions, this could just be set to the full alphabet (26) or longer if you allow looping back to the start of the sequence.

    You also need to define the number of random sequences to create—in this case, 1000. This, too, could be more or less. It’s likely fewer patterns are actually required.
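
    One way to generate the random subsequences is sketched below; the exact sampling scheme (drawing a random start index, then an end index at most max_len characters later) is an illustrative choice:

        # prepare the dataset of random-length input sequences
        num_inputs = 1000
        max_len = 5
        dataX = []
        dataY = []
        for i in range(num_inputs):
            start = np.random.randint(len(alphabet) - 2)
            end = np.random.randint(start, min(start + max_len, len(alphabet) - 1))
            sequence_in = alphabet[start:end + 1]
            sequence_out = alphabet[end + 1]
            dataX.append([char_to_int[char] for char in sequence_in])
            dataY.append(char_to_int[sequence_out])
            print(sequence_in, '->', sequence_out)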

    Running this code in the broader context will create input patterns that look like the following:
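
    The exact sequences are random, so your output will differ, but each line has the same form: a subsequence of between 1 and max_len letters mapped to the next letter of the alphabet, for example:

        PQRST -> U
        W -> X
        GHIJ -> K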

    The input sequences vary in length between 1 and max_len and therefore require zero padding. Here, use left-hand-side (prefix) padding with the Keras built-in pad_sequences() function.
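
    A sketch of the padding and preparation step, using pad_sequences from Keras:

        from tensorflow.keras.preprocessing.sequence import pad_sequences

        # pad the variable-length sequences on the left with zeros
        X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
        # reshape, normalize, and one-hot encode as before
        X = np.reshape(X, (X.shape[0], max_len, 1))
        X = X / float(len(alphabet))
        y = to_categorical(dataY)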

    The trained model is evaluated on randomly selected input patterns. This could just as easily be new randomly generated sequences of characters. This could also be a linear sequence seeded with “A” with outputs fed back in as single character inputs.
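
    Assuming a model trained on X and y as above, the evaluation on randomly chosen training patterns might be sketched as:

        # demonstrate predictions on randomly selected patterns
        for i in range(20):
            pattern_index = np.random.randint(len(dataX))
            pattern = dataX[pattern_index]
            x = pad_sequences([pattern], maxlen=max_len, dtype='float32')
            x = np.reshape(x, (1, max_len, 1))
            x = x / float(len(alphabet))
            prediction = model.predict(x, verbose=0)
            index = np.argmax(prediction)
            result = int_to_char[index]
            seq_in = [int_to_char[value] for value in pattern]
            print(seq_in, "->", result)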

    The full code listing is provided below for completeness.

    Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

    Running this code produces the following output:

    You can see that although the model did not learn the alphabet perfectly from the randomly generated subsequences, it did very well. The model was not tuned and might require more training, a larger network, or both (an exercise for the reader).

    This is a good natural extension to the “all sequential input examples in each batch” alphabet model learned above in that it can handle ad hoc queries, but this time of arbitrary sequence length (up to the max length).

    Summary

    In this post, you discovered LSTM recurrent neural networks in Keras and how they manage state.

    Specifically, you learned:

    • How to develop a naive LSTM network for one-character to one-character prediction
    • How to configure a naive LSTM to learn a sequence across time steps within a sample
    • How to configure an LSTM to learn a sequence across samples by manually managing state

    Do you have any questions about managing an LSTM state or this post?
    Ask your questions in the comments, and I will do my best to answer.
