The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems such as machine translation.
Encoder-decoder models can be developed in the Keras Python deep learning library, and an example of a neural machine translation system built with this architecture is described on the Keras blog, with sample code distributed with the Keras project.
This example can provide the basis for developing encoder-decoder LSTM models for your own sequence-to-sequence prediction problems.
In this tutorial, you will discover how to develop a sophisticated encoder-decoder recurrent neural network for sequence-to-sequence prediction problems with Keras.
After completing this tutorial, you will know:
- How to correctly define a sophisticated encoder-decoder model in Keras for sequence-to-sequence prediction.
- How to define a contrived yet scalable sequence-to-sequence prediction problem that you can use to evaluate the encoder-decoder LSTM model.
- How to apply the encoder-decoder LSTM model in Keras to address the scalable integer sequence-to-sequence prediction problem.
Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Jan/2020: Updated API for Keras 2.3 and TensorFlow 2.0.
Tutorial Overview
This tutorial is divided into 3 parts; they are:
- Encoder-Decoder Model in Keras
- Scalable Sequence-to-Sequence Problem
- Encoder-Decoder LSTM for Sequence Prediction
Python Environment
This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this tutorial.
You must have Keras (2.0 or higher) installed with either the TensorFlow or Theano backend.
The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.
If you need help with your environment, see the post “How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda” listed under Further Reading below.
Encoder-Decoder Model in Keras
The encoder-decoder model is a way of organizing recurrent neural networks for sequence-to-sequence prediction problems.
It was originally developed for machine translation problems, although it has proven successful at related sequence-to-sequence prediction problems such as text summarization and question answering.
The approach involves two recurrent neural networks, one to encode the source sequence, called the encoder, and a second to decode the encoded source sequence into the target sequence, called the decoder.
The Keras deep learning Python library provides an example of how to implement the encoder-decoder model for machine translation (lstm_seq2seq.py), described by the library’s creator in the post “A ten-minute introduction to sequence-to-sequence learning in Keras.”
For a detailed breakdown of this model, see the post “How to Define an Encoder-Decoder Sequence-to-Sequence Model for Neural Machine Translation in Keras” under Further Reading.
For more information on the use of return_state, which might be new to you, see the post “Understand the Difference Between Return Sequences and Return States for LSTMs in Keras.”
For more help getting started with the Keras functional API, see the post “How to Use the Keras Functional API for Deep Learning.”
Using the code in that example as a starting point, we can develop a generic function to define an encoder-decoder recurrent neural network. Below is this function named define_models().
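One way to write this function, adapted from the Keras lstm_seq2seq example and assuming the TensorFlow 2.x tf.keras API, is sketched below; the layer wiring follows the standard Keras seq2seq recipe rather than being the only possible implementation.

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# returns the train, inference_encoder, and inference_decoder models
def define_models(n_input, n_output, n_units):
    # define the training encoder
    encoder_inputs = Input(shape=(None, n_input))
    encoder = LSTM(n_units, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]
    # define the training decoder, initialized with the encoder states
    decoder_inputs = Input(shape=(None, n_output))
    decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(n_output, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    # define the inference encoder
    encoder_model = Model(encoder_inputs, encoder_states)
    # define the inference decoder, which takes the states as additional inputs
    decoder_state_input_h = Input(shape=(n_units,))
    decoder_state_input_c = Input(shape=(n_units,))
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
    # return all three models
    return model, encoder_model, decoder_model
```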
The function takes 3 arguments, as follows:
- n_input: The cardinality of the input sequence, e.g. number of features, words, or characters for each time step.
- n_output: The cardinality of the output sequence, e.g. number of features, words, or characters for each time step.
- n_units: The number of cells to create in the encoder and decoder models, e.g. 128 or 256.
The function then creates and returns 3 models, as follows:
- train: Model that can be trained given source, target, and shifted target sequences.
- inference_encoder: Encoder model used when making a prediction for a new source sequence.
- inference_decoder: Decoder model used when making a prediction for a new source sequence.
The model is trained on source and target sequences: it takes both the source sequence and a shifted version of the target sequence as input and predicts the whole target sequence.
For example, one source sequence may be [1,2,3] and the target sequence [4,5,6]. The inputs and outputs to the model during training would be:
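```
Input1 (source):          [1, 2, 3]
Input2 (shifted target):  ['_', 4, 5]
Output (target):          [4, 5, 6]
```

Here ‘_’ is a placeholder for the “start of sequence” value described below.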
The model is intended to be called recursively when generating target sequences for new source sequences.
The source sequence is encoded and the target sequence is generated one element at a time, using a “start of sequence” character such as ‘_’ to start the process. Therefore, in the above case, the following input-output pairs would occur when generating the target sequence:
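Conceptually, the decoder is fed the target generated so far at each step; in the implementation sketched below, only the most recently generated element plus the decoder state needs to be passed forward.

```
t=1:  Input1: [1, 2, 3],  Input2: ['_'],        Output: 4
t=2:  Input1: [1, 2, 3],  Input2: ['_', 4],     Output: 5
t=3:  Input1: [1, 2, 3],  Input2: ['_', 4, 5],  Output: 6
```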
Here you can see how the recursive use of the model can be used to build up output sequences.
During prediction, the inference_encoder model is used to encode the input sequence once, returning the states used to initialize the inference_decoder model. From that point, the inference_decoder model is used to generate predictions step by step.
The function below named predict_sequence() can be used after the model is trained to generate a target sequence given a source sequence.
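One possible implementation is sketched below, assuming the infenc and infdec models returned by the define_models() function above and a one-hot encoded source sequence:

```python
from numpy import array
from tensorflow.keras.utils import to_categorical

# generate a target sequence given a one-hot encoded source sequence
def predict_sequence(infenc, infdec, source, n_steps, cardinality):
    # encode the source sequence into state vectors
    state = infenc.predict(source)
    # start-of-sequence input: one-hot encoding of the reserved value 0
    target_seq = to_categorical([0], num_classes=cardinality).reshape(1, 1, cardinality)
    # collect predictions one time step at a time
    output = list()
    for _ in range(n_steps):
        # predict the next element and the updated decoder states
        yhat, h, c = infdec.predict([target_seq] + state)
        output.append(yhat[0, 0, :])
        # carry the state forward and feed the prediction back in as the next input
        state = [h, c]
        target_seq = yhat
    return array(output)
```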
This function takes 5 arguments as follows:
- infenc: Encoder model used when making a prediction for a new source sequence.
- infdec: Decoder model used when making a prediction for a new source sequence.
- source: Encoded source sequence.
- n_steps: Number of time steps in the target sequence.
- cardinality: The cardinality of the output sequence, e.g. the number of features, words, or characters for each time step.
The function then returns a list containing the target sequence.
Scalable Sequence-to-Sequence Problem
In this section, we define a contrived and scalable sequence-to-sequence prediction problem.
The source sequence is a series of randomly generated integer values, such as [20, 36, 40, 10, 34, 28], and the target sequence is a reversed pre-defined subset of the input sequence, such as the first 3 elements in reverse order [40, 36, 20].
The length of the source sequence is configurable, as are the cardinality of the input and output sequences and the length of the target sequence.
We will use source sequences of 6 elements, a cardinality of 50, and target sequences of 3 elements.
Below are some more examples to make this concrete.
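With the configuration above, randomly drawn source and target pairs would look something like this (the specific integers are only illustrative):

```
Source: [13, 28, 18, 7, 9, 5]     ->  Target: [18, 28, 13]
Source: [29, 44, 38, 15, 26, 22]  ->  Target: [38, 44, 29]
Source: [27, 40, 31, 29, 32, 1]   ->  Target: [31, 40, 27]
```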
You are encouraged to explore larger and more complex variations. Post your findings in the comments below.
Let’s start off by defining a function to generate a sequence of random integers.
We will use the value of 0 as the padding or start-of-sequence character, therefore it is reserved and we cannot use it in our source sequences. To achieve this, we will add 1 to our configured cardinality to ensure the one-hot encoding is large enough (e.g. the value 1 maps to a one-hot vector with a 1 at index 1).
For example:
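With the configured cardinality of 50, the size of the one-hot vectors (called n_features in the sketches below) would be defined as:

```python
# one extra slot for the reserved 0 (padding / start of sequence)
n_features = 50 + 1
```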
We can use the randint() Python function to generate random integers in the range from 1 to one less than the problem’s cardinality, so the reserved value 0 is never generated. The generate_sequence() function below generates a sequence of random integers.
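A minimal sketch of this function, assuming it is passed the one-hot vector size (e.g. n_features) as n_unique:

```python
from random import randint

# generate a sequence of random integers in the range [1, n_unique-1]
def generate_sequence(length, n_unique):
    return [randint(1, n_unique - 1) for _ in range(length)]
```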
Next, we need to create the corresponding output sequence given the source sequence.
To keep things simple, we will select the first n elements of the source sequence as the target sequence and reverse them.
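For example, assuming the source list is named source and the target length is n_out:

```python
# the target is the first n_out elements of the source, reversed
target = source[:n_out]
target.reverse()
```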
We also need a version of the output sequence shifted forward by one time step that we can use as the mock target generated so far, including the start of sequence value in the first time step. We can create this from the target sequence directly.
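This can be created directly from the target sequence, for example:

```python
# shifted target: the reserved 0 as start of sequence, followed by all but the last target element
target_in = [0] + target[:-1]
```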
Now that all of the sequences have been defined, we can one-hot encode them, i.e. transform them into sequences of binary vectors. We can use the built-in Keras to_categorical() function to achieve this.
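For example (the variable names here are illustrative):

```python
from tensorflow.keras.utils import to_categorical

# one-hot encode each integer sequence into a (time steps, n_features) binary matrix
src_encoded = to_categorical(source, num_classes=n_features)
tar_encoded = to_categorical(target, num_classes=n_features)
tar_shifted_encoded = to_categorical(target_in, num_classes=n_features)
```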
We can put all of this into a function named get_dataset() that will generate a specific number of sequences that we can use to train a model.
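One way to write this function, following the steps above:

```python
from random import randint
from numpy import array
from tensorflow.keras.utils import to_categorical

# prepare one-hot encoded source, shifted-target, and target sequences
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        # generate a random source sequence
        source = generate_sequence(n_in, cardinality)
        # define the target as the reversed first n_out elements
        target = source[:n_out]
        target.reverse()
        # create the shifted target sequence, starting with the reserved 0
        target_in = [0] + target[:-1]
        # one-hot encode each sequence
        X1.append(to_categorical(source, num_classes=cardinality))
        X2.append(to_categorical(target_in, num_classes=cardinality))
        y.append(to_categorical(target, num_classes=cardinality))
    return array(X1), array(X2), array(y)
```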
Finally, we need to be able to decode a one-hot encoded sequence to make it readable again.
This is needed both for printing the generated target sequences and for easily checking whether the full predicted target sequence matches the expected target sequence. The one_hot_decode() function will decode an encoded sequence.
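A simple sketch of this function:

```python
from numpy import argmax

# decode a one-hot encoded sequence back to a list of integers
def one_hot_decode(encoded_seq):
    return [int(argmax(vector)) for vector in encoded_seq]
```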
We can tie all of this together and test these functions.
A complete worked example is listed below.
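The listing below is one way to tie the data-preparation functions together, again assuming the TensorFlow 2.x tf.keras API:

```python
from random import randint
from numpy import array, argmax
from tensorflow.keras.utils import to_categorical

# generate a sequence of random integers in the range [1, n_unique-1]
def generate_sequence(length, n_unique):
    return [randint(1, n_unique - 1) for _ in range(length)]

# prepare one-hot encoded source, shifted-target, and target sequences
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        source = generate_sequence(n_in, cardinality)
        # target is the reversed first n_out elements of the source
        target = source[:n_out]
        target.reverse()
        # shifted target starts with the reserved 0
        target_in = [0] + target[:-1]
        X1.append(to_categorical(source, num_classes=cardinality))
        X2.append(to_categorical(target_in, num_classes=cardinality))
        y.append(to_categorical(target, num_classes=cardinality))
    return array(X1), array(X2), array(y)

# decode a one-hot encoded sequence back to a list of integers
def one_hot_decode(encoded_seq):
    return [int(argmax(vector)) for vector in encoded_seq]

# configure the problem
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
# generate a single example and inspect it
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
print(X1.shape, X2.shape, y.shape)
print('X1=%s, X2=%s, y=%s' % (one_hot_decode(X1[0]), one_hot_decode(X2[0]), one_hot_decode(y[0])))
```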
Running the example first prints the shape of the generated dataset, ensuring the 3D shape required to train the model matches our expectations.
The generated sequence is then decoded and printed to screen demonstrating both that the preparation of source and target sequences matches our intention and that the decode operation is working.
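With a single generated sample, the printed shapes should be (1, 6, 51), (1, 3, 51), and (1, 3, 51); the decoded line will differ on every run, but will look something like this:

```
(1, 6, 51) (1, 3, 51) (1, 3, 51)
X1=[32, 16, 12, 34, 25, 24], X2=[0, 12, 16], y=[12, 16, 32]
```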
We are now ready to develop a model for this sequence-to-sequence prediction problem.
Encoder-Decoder LSTM for Sequence Prediction
In this section, we will apply the encoder-decoder LSTM model developed in the first section to the sequence-to-sequence prediction problem developed in the second section.
The first step is to configure the problem.
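Following the problem definition from the previous section:

```python
# configure the problem: cardinality plus one reserved value, 6-step source, 3-step target
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
```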
Next, we must define the models and compile the training model.
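For example, using 128 LSTM units and the Adam optimizer (both illustrative choices):

```python
# define the training model and the two inference models
train, infenc, infdec = define_models(n_features, n_features, 128)
train.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```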
Next, we can generate a training dataset of 100,000 examples and train the model.
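A single pass over the generated examples is assumed to be sufficient here:

```python
# generate a training dataset and fit the model for one epoch
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 100000)
print(X1.shape, X2.shape, y.shape)
train.fit([X1, X2], y, epochs=1)
```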
Once the model is trained, we can evaluate it. We will do this by making predictions for 100 source sequences and counting the number of target sequences that were predicted correctly. We will use the numpy array_equal() function on the decoded sequences to check for equality.
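A sketch of this evaluation loop, reusing the helper functions defined above:

```python
from numpy import array_equal

# evaluate: predict 100 new source sequences and count exact matches
total, correct = 100, 0
for _ in range(total):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    if array_equal(one_hot_decode(y[0]), one_hot_decode(target)):
        correct += 1
print('Accuracy: %.2f%%' % (float(correct) / float(total) * 100.0))
```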
Finally, we will generate some predictions and print the decoded source, target, and predicted target sequences to get an idea of whether the model is working as expected.
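For example:

```python
# spot check: print decoded source, expected target, and predicted target
for _ in range(10):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    print('X=%s y=%s, yhat=%s' % (one_hot_decode(X1[0]), one_hot_decode(y[0]), one_hot_decode(target)))
```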
Putting all of these elements together, the complete code example is listed below.
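The listing below is a complete, runnable sketch that assembles the pieces above, assuming the TensorFlow 2.x tf.keras API; the layer size, optimizer, and number of epochs are illustrative choices rather than the only reasonable settings.

```python
from random import randint
from numpy import array, argmax, array_equal
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

# generate a sequence of random integers in the range [1, n_unique-1]
def generate_sequence(length, n_unique):
    return [randint(1, n_unique - 1) for _ in range(length)]

# prepare one-hot encoded source, shifted-target, and target sequences
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        source = generate_sequence(n_in, cardinality)
        # target is the reversed first n_out elements of the source
        target = source[:n_out]
        target.reverse()
        # shifted target starts with the reserved 0
        target_in = [0] + target[:-1]
        X1.append(to_categorical(source, num_classes=cardinality))
        X2.append(to_categorical(target_in, num_classes=cardinality))
        y.append(to_categorical(target, num_classes=cardinality))
    return array(X1), array(X2), array(y)

# decode a one-hot encoded sequence back to a list of integers
def one_hot_decode(encoded_seq):
    return [int(argmax(vector)) for vector in encoded_seq]

# returns the train, inference_encoder, and inference_decoder models
def define_models(n_input, n_output, n_units):
    # training encoder
    encoder_inputs = Input(shape=(None, n_input))
    encoder = LSTM(n_units, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]
    # training decoder
    decoder_inputs = Input(shape=(None, n_output))
    decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(n_output, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    # inference encoder
    encoder_model = Model(encoder_inputs, encoder_states)
    # inference decoder
    decoder_state_input_h = Input(shape=(n_units,))
    decoder_state_input_c = Input(shape=(n_units,))
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
    return model, encoder_model, decoder_model

# generate a target sequence given a one-hot encoded source sequence
def predict_sequence(infenc, infdec, source, n_steps, cardinality):
    # encode the source sequence into state vectors
    state = infenc.predict(source)
    # start-of-sequence input: one-hot encoding of the reserved value 0
    target_seq = to_categorical([0], num_classes=cardinality).reshape(1, 1, cardinality)
    output = list()
    for _ in range(n_steps):
        # predict the next element and the updated decoder states
        yhat, h, c = infdec.predict([target_seq] + state)
        output.append(yhat[0, 0, :])
        state = [h, c]
        target_seq = yhat
    return array(output)

# configure the problem
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
# define and compile the training model
train, infenc, infdec = define_models(n_features, n_features, 128)
train.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# generate a training dataset and fit the model
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 100000)
print(X1.shape, X2.shape, y.shape)
train.fit([X1, X2], y, epochs=1)
# evaluate on new random examples
total, correct = 100, 0
for _ in range(total):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    if array_equal(one_hot_decode(y[0]), one_hot_decode(target)):
        correct += 1
print('Accuracy: %.2f%%' % (float(correct) / float(total) * 100.0))
# spot check some predictions
for _ in range(10):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    print('X=%s y=%s, yhat=%s' % (one_hot_decode(X1[0]), one_hot_decode(y[0]), one_hot_decode(target)))
```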
Running the example first prints the shape of the prepared dataset.
Next, the model is fit. You should see a progress bar and the run should take less than one minute on a modern multi-core CPU.
Next, the model is evaluated and the accuracy printed.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that the model achieves 100% accuracy on new randomly generated examples.
Finally, 10 new examples are generated and target sequences are predicted. Again, we can see that the model correctly predicts the output sequence in each case and the expected value matches the reversed first 3 elements of the source sequences.
You now have a template for an encoder-decoder LSTM model that you can apply to your own sequence-to-sequence prediction problems.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Related Posts
- How to Setup a Python Environment for Machine Learning and Deep Learning with Anaconda
- How to Define an Encoder-Decoder Sequence-to-Sequence Model for Neural Machine Translation in Keras
- Understand the Difference Between Return Sequences and Return States for LSTMs in Keras
- How to Use the Keras Functional API for Deep Learning
Keras Resources
- A ten-minute introduction to sequence-to-sequence learning in Keras
- Keras seq2seq Code Example (lstm_seq2seq)
- Keras Functional API
- LSTM API in Keras
Summary
In this tutorial, you discovered how to develop an encoder-decoder recurrent neural network for sequence-to-sequence prediction problems with Keras.
Specifically, you learned:
- How to correctly define a sophisticated encoder-decoder model in Keras for sequence-to-sequence prediction.
- How to define a contrived yet scalable sequence-to-sequence prediction problem that you can use to evaluate the encoder-decoder LSTM model.
- How to apply the encoder-decoder LSTM model in Keras to address the scalable integer sequence-to-sequence prediction problem.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.