Recurrent neural networks can also be used as generative models.
This means that, in addition to being used for predictive modeling (making predictions), they can learn the structure of the sequences in a problem and then generate entirely new, plausible sequences for the problem domain.
Generative models like this are useful not only to study how well a model has learned a problem but also to learn more about the problem domain itself.
In this post, you will discover how to create a generative model for text, character-by-character using LSTM recurrent neural networks in Python with Keras.
After reading this post, you will know:
- Where to download a free corpus of text that you can use to train text generative models
- How to frame the problem of text sequences to a recurrent neural network generative model
- How to develop an LSTM to generate plausible text sequences for a given problem

Let’s get started.
Note: LSTM recurrent neural networks can be slow to train, and it is highly recommended that you train them on GPU hardware. You can access GPU hardware in the cloud very cheaply using Amazon Web Services.
Problem Description: Project Gutenberg
Many of the classical texts are no longer protected under copyright.
This means you can download all the text for these books for free and use them in experiments, like creating generative models. Perhaps the best place to get access to free books that are no longer protected by copyright is Project Gutenberg.
In this tutorial, you will use a favorite book from childhood as the dataset: Alice’s Adventures in Wonderland by Lewis Carroll.
You will learn the dependencies between characters and the conditional probabilities of characters in sequences so that you can, in turn, generate wholly new and original sequences of characters.
This is a lot of fun, and repeating these experiments with other books from Project Gutenberg is recommended. Here is a list of the most popular books on the site.
These experiments are not limited to text; you can also experiment with other ASCII data, such as computer source code, marked-up documents in LaTeX, HTML or Markdown, and more.
You can download the complete text in ASCII format (Plain Text UTF-8) for this book for free and place it in your working directory with the filename wonderland.txt.
Now, you need to prepare the dataset for modeling.
Project Gutenberg adds a standard header and footer to each book, which is not part of the original text. Open the file in a text editor and delete the header and footer.
The header is obvious and ends with the line that marks the start of the ebook (a line beginning with “*** START OF THIS PROJECT GUTENBERG EBOOK”). The footer is all the text after the matching line near the end that begins with “*** END OF THIS PROJECT GUTENBERG EBOOK”.
You should be left with a text file that has about 3,330 lines of text.
Develop a Small LSTM Recurrent Neural Network
In this section, you will develop a simple LSTM network to learn sequences of characters from Alice in Wonderland. In the next section, you will use this model to generate new sequences of characters.
Let’s start by importing the classes and functions you will use to train your model.
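The exact imports depend on your Keras version; the following is a minimal sketch assuming a recent TensorFlow installation with the bundled tensorflow.keras API:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import to_categorical
```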
Next, you need to load the ASCII text for the book into memory and convert all of the characters to lowercase to reduce the vocabulary the network must learn.
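For example, assuming the book was saved as wonderland.txt in the working directory, the loading step might look like this:

```python
# load the ASCII text for the book and convert it to lowercase
filename = "wonderland.txt"
raw_text = open(filename, "r", encoding="utf-8").read()
raw_text = raw_text.lower()
```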
Now that the book is loaded, you must prepare the data for modeling by the neural network. You cannot model the characters directly; instead, you must convert the characters to integers.
You can do this easily by first creating a set of all of the distinct characters in the book, then creating a map of each character to a unique integer.
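A sketch of that mapping step, using the raw_text variable loaded above:

```python
# create a mapping of unique characters to integers
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
```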
For example, printing the sorted list of unique lowercase characters in the book shows that, in addition to the letters, it contains whitespace, punctuation, and a handful of other symbols. You can see that there may be some characters that could be removed to further clean up the dataset and reduce the vocabulary, which may improve the modeling process.
Now that the book has been loaded and the mapping prepared, you can summarize the dataset.
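For example, you might print the total number of characters and the vocabulary size like this:

```python
# summarize the loaded data
n_chars = len(raw_text)
n_vocab = len(chars)
print("Total Characters: ", n_chars)
print("Total Vocab: ", n_vocab)
```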
Running the code to this point prints the total number of characters and the size of the vocabulary. You can see the book has just under 150,000 characters and, when converted to lowercase, only 47 distinct characters in the vocabulary for the network to learn, more than the 26 letters of the alphabet because whitespace and punctuation are also included.
You now need to define the training data for the network. There is a lot of flexibility in how you choose to break up the text and expose it to the network during training.
In this tutorial, you will split the book text up into subsequences with a fixed length of 100 characters, an arbitrary length. You could just as easily split the data by sentences, padding the shorter sequences and truncating the longer ones.
Each training pattern of the network comprises 100 time steps of one character (X) followed by one character output (y). When creating these sequences, you slide this window along the whole book one character at a time, allowing each character a chance to be learned from the 100 characters that preceded it (except the first 100 characters, of course).
For example, if the sequence length were 5 (for simplicity) and the text began with the word “chapter”, the first two training patterns would be “chapt” -> “e” and “hapte” -> “r”.
As you split the book into these sequences, you convert the characters to integers using the lookup table you prepared earlier.
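A sketch of this windowing step, assuming the raw_text, char_to_int, and n_chars variables defined above:

```python
# prepare the dataset of input-to-output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length):
    seq_in = raw_text[i:i + seq_length]   # 100 characters of input
    seq_out = raw_text[i + seq_length]    # the single character to predict
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
print("Total Patterns: ", n_patterns)
```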
Running the code to this point shows that when you split the dataset into training data for the network, you have just under 150,000 training patterns. This makes sense because, excluding the first 100 characters, you have one training pattern with which to predict each of the remaining characters.
Now that you have prepared your training data, you need to transform it to be suitable for use with Keras.
First, you must transform the list of input sequences into the form [samples, time steps, features] expected by an LSTM network.
Next, you need to rescale the integers to the range 0-to-1 to make the patterns easier for the LSTM network to learn, given the sigmoid activation functions it uses by default.
Finally, you need to convert the output patterns (single characters converted to integers) into a one-hot encoding. This is so that you can configure the network to predict the probability of each of the 47 different characters in the vocabulary (an easier representation) rather than trying to force it to predict precisely the next character. Each y value is converted into a sparse vector with a length of 47, full of zeros, except with a 1 in the column for the letter (integer) that the pattern represents.
For example, when “n” (integer value 31) is one-hot encoded, it becomes a vector of 47 values that are all 0 except for a single 1 at index 31.
You can implement these steps as below:
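A sketch of these three steps, using the dataX, dataY, n_patterns, seq_length, and n_vocab variables from above:

```python
# reshape X to be [samples, time steps, features]
X = np.reshape(dataX, (n_patterns, seq_length, 1))
# normalize the integer values to the range 0-to-1
X = X / float(n_vocab)
# one-hot encode the output variable
y = to_categorical(dataY)
```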
You can now define your LSTM model. Here, you define a single hidden LSTM layer with 256 memory units. The network uses dropout with a probability of 20 percent. The output layer is a Dense layer using the softmax activation function to output a probability between 0 and 1 for each of the 47 characters.
The problem is really a single-character classification problem with 47 classes and, as such, is defined as optimizing the log loss (cross entropy), here using the Adam optimization algorithm for speed.
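A sketch of the model definition under these choices (the layer sizes and dropout rate follow the description above; the API calls assume tensorflow.keras):

```python
# define a single-layer LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")
```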
There is no test dataset. You are modeling the entire training dataset to learn the probability of each character in a sequence.
You are not interested in the most accurate (classification accuracy) model of the training dataset. This would be a model that predicts each character in the training dataset perfectly. Instead, you are interested in a generalization of the dataset that minimizes the chosen loss function. You are seeking a balance between generalization and overfitting that stops short of memorization.
The network is slow to train (about 300 seconds per epoch on an Nvidia K520 GPU). Because of the slowness and because of the optimization requirements, use model checkpointing to record all the network weights to file each time an improvement in loss is observed at the end of the epoch. You will use the best set of weights (lowest loss) to instantiate your generative model in the next section.
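One way to set up the checkpointing, using a filename pattern of my own choosing that embeds the epoch number and the loss:

```python
# define a checkpoint that saves the weights each time the loss improves
filepath = "weights-improvement-{epoch:02d}-{loss:.4f}.weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor="loss", verbose=1,
                             save_best_only=True, save_weights_only=True, mode="min")
callbacks_list = [checkpoint]
```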
You can now fit your model to the data. Here, you use a modest number of 20 epochs and a large batch size of 128 patterns.
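The fitting call might look like the following, using the checkpoint callback defined above:

```python
# fit the model
model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)
```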
The full code listing is provided below for completeness.
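As a sketch, the snippets above combine into a single script along these lines (the variable names, checkpoint filename pattern, and tensorflow.keras import paths are my own assumptions):

```python
# Small LSTM network to learn Alice in Wonderland character sequences
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import to_categorical

# load the ASCII text for the book and convert it to lowercase
filename = "wonderland.txt"
raw_text = open(filename, "r", encoding="utf-8").read()
raw_text = raw_text.lower()

# create a mapping of unique characters to integers
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))

# summarize the loaded data
n_chars = len(raw_text)
n_vocab = len(chars)
print("Total Characters: ", n_chars)
print("Total Vocab: ", n_vocab)

# prepare the dataset of input-to-output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
print("Total Patterns: ", n_patterns)

# reshape X to be [samples, time steps, features] and normalize to 0-1
X = np.reshape(dataX, (n_patterns, seq_length, 1))
X = X / float(n_vocab)
# one-hot encode the output variable
y = to_categorical(dataY)

# define the single-layer LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")

# checkpoint the weights each time an improvement in loss is observed
filepath = "weights-improvement-{epoch:02d}-{loss:.4f}.weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor="loss", verbose=1,
                             save_best_only=True, save_weights_only=True, mode="min")

# fit the model
model.fit(X, y, epochs=20, batch_size=128, callbacks=[checkpoint])
```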
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
After running the example, you should have a number of weight checkpoint files in the local directory.
You can delete them all except the one with the smallest loss value encoded in its filename; that is the checkpoint you will use to seed the generative model in the next section.
The network loss decreased almost every epoch, so the network could likely benefit from training for many more epochs.
In the next section, you will look at using this model to generate new text sequences.
Generating Text with an LSTM Network
Generating text using the trained LSTM network is relatively straightforward.
First, you will load the data and define the network in exactly the same way, except the network weights are loaded from a checkpoint file, and the network does not need to be trained.
Also, when preparing the mapping of unique characters to integers, you must also create a reverse mapping that you can use to convert the integers back to characters so that you can understand the predictions.
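A sketch of these two additions, assuming the model, chars, and compile settings are defined exactly as in the training script:

```python
# load the network weights from the best checkpoint of the training run
filename = "weights-improvement-XX-X.XXXX.weights.h5"  # placeholder: use the best checkpoint from your run
model.load_weights(filename)
model.compile(loss="categorical_crossentropy", optimizer="adam")

# create a reverse mapping so integer predictions can be converted back to characters
int_to_char = dict((i, c) for i, c in enumerate(chars))
```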
Finally, you need to actually make predictions.
The simplest way to use the Keras LSTM model to make predictions is to first start with a seed sequence as input, generate the next character, then update the seed sequence to add the generated character on the end and trim off the first character. This process is repeated for as long as you want to predict new characters (e.g., a sequence of 1,000 characters in length).
You can pick a random input pattern as your seed sequence, then print generated characters as you generate them.
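A sketch of that generation loop, assuming the model, dataX, int_to_char, and n_vocab variables prepared earlier:

```python
import sys

# pick a random seed pattern from the prepared training sequences
start = np.random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])  # copy so the original data is not modified
print("Seed:")
print("\"" + "".join([int_to_char[value] for value in pattern]) + "\"")

# generate 1,000 characters
for i in range(1000):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    sys.stdout.write(int_to_char[index])
    # slide the window: append the prediction and drop the first character
    pattern.append(index)
    pattern = pattern[1:]
print("\nDone.")
```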
The full code example for generating text using the loaded LSTM model is listed below for completeness.
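As a sketch, the pieces combine into a self-contained script along these lines (the checkpoint filename is a placeholder to be replaced with the best file from your own training run):

```python
# Generate text character by character with the trained small LSTM network
import sys
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.utils import to_categorical

# load the ASCII text for the book and convert it to lowercase
filename = "wonderland.txt"
raw_text = open(filename, "r", encoding="utf-8").read()
raw_text = raw_text.lower()

# create mappings of unique characters to integers and back again
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
n_chars = len(raw_text)
n_vocab = len(chars)

# prepare the dataset of input-to-output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
X = np.reshape(dataX, (n_patterns, seq_length, 1))
X = X / float(n_vocab)
y = to_categorical(dataY)

# define the network exactly as it was trained
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))

# load the network weights from the best checkpoint of the training run
weights_file = "weights-improvement-XX-X.XXXX.weights.h5"  # placeholder: use your own checkpoint
model.load_weights(weights_file)
model.compile(loss="categorical_crossentropy", optimizer="adam")

# pick a random seed pattern and print it
start = np.random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])
print("Seed:")
print("\"" + "".join([int_to_char[value] for value in pattern]) + "\"")

# generate 1,000 characters, sliding the seed window along as we go
for i in range(1000):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    sys.stdout.write(int_to_char[index])
    pattern.append(index)
    pattern = pattern[1:]
print("\nDone.")
```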
Running this example first outputs the selected random seed, then each character as it is generated.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
For example, below are the results from one run of this text generator. The random seed was:
The generated text with the random seed (cleaned up for presentation) was:
Let’s note some observations about the generated text.
- It generally conforms to the line format observed in the original text, with fewer than 80 characters before a new line.
- The characters are separated into word-like groups, and most groups are actual English words (e.g., “the,” “little,” and “was”), but many are not (e.g., “lott,” “tiie,” and “taede”).
- Some of the words in sequence make sense (e.g., “and the white rabbit”), but many do not (e.g., “wese tilel”).
The fact that this character-based model of the book produces output like this is very impressive. It gives you a sense of the learning capabilities of LSTM networks.
However, the results are not perfect.
In the next section, you will look at improving the quality of results by developing a much larger LSTM network.
Larger LSTM Recurrent Neural Network
You got results in the previous section, but not excellent results. Now, you can try to improve the quality of the generated text by creating a much larger network.
You will keep the number of memory units the same at 256 but add a second layer.
You will also change the filename of the checkpointed weights so that you can tell the difference between the weights for this network and the previous one (by appending the word “bigger” to the filename).
Finally, you will increase the number of training epochs from 20 to 50 and decrease the batch size from 128 to 64 to give the network more of an opportunity to be updated and learn.
The full code listing is presented below for completeness.
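As a sketch, the full script for the larger network keeps the same structure as before, with the two stacked LSTM layers, the amended checkpoint filename, and the new training settings (the filename pattern and tensorflow.keras import paths remain my own assumptions):

```python
# Larger, two-layer LSTM network to learn Alice in Wonderland character sequences
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import to_categorical

# load the ASCII text for the book and convert it to lowercase
filename = "wonderland.txt"
raw_text = open(filename, "r", encoding="utf-8").read()
raw_text = raw_text.lower()

# create a mapping of unique characters to integers
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
n_chars = len(raw_text)
n_vocab = len(chars)

# prepare the dataset of input-to-output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)

# reshape, normalize, and one-hot encode as before
X = np.reshape(dataX, (n_patterns, seq_length, 1))
X = X / float(n_vocab)
y = to_categorical(dataY)

# define the larger LSTM model with two stacked LSTM layers
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(256))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")

# checkpoint the weights, appending "bigger" to the filename
filepath = "weights-improvement-{epoch:02d}-{loss:.4f}-bigger.weights.h5"
checkpoint = ModelCheckpoint(filepath, monitor="loss", verbose=1,
                             save_best_only=True, save_weights_only=True, mode="min")

# fit the model with more epochs and a smaller batch size
model.fit(X, y, epochs=50, batch_size=64, callbacks=[checkpoint])
```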
Running this example takes some time, at least 700 seconds per epoch.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
After running this example, you may achieve a loss of about 1.2. For example, the best result achieved from running this model was a loss of 1.2219 at epoch 47, stored in the corresponding checkpoint file.
As in the previous section, you can use this best model from the run to generate text.
The only change you need to make to the text generation script from the previous section is in the specification of the network topology and from which file to seed the network weights.
The full code listing is provided below for completeness.
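A sketch of the complete generation script for the larger network, identical to the previous one apart from the two-layer network definition and the checkpoint filename (again, replace the placeholder filename with the best checkpoint from your own run):

```python
# Generate text character by character with the trained larger LSTM network
import sys
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.utils import to_categorical

# load the ASCII text for the book and convert it to lowercase
filename = "wonderland.txt"
raw_text = open(filename, "r", encoding="utf-8").read()
raw_text = raw_text.lower()

# create mappings of unique characters to integers and back again
chars = sorted(list(set(raw_text)))
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
n_chars = len(raw_text)
n_vocab = len(chars)

# prepare the dataset of input-to-output pairs encoded as integers
seq_length = 100
dataX = []
dataY = []
for i in range(0, n_chars - seq_length):
    seq_in = raw_text[i:i + seq_length]
    seq_out = raw_text[i + seq_length]
    dataX.append([char_to_int[char] for char in seq_in])
    dataY.append(char_to_int[seq_out])
n_patterns = len(dataX)
X = np.reshape(dataX, (n_patterns, seq_length, 1))
X = X / float(n_vocab)
y = to_categorical(dataY)

# define the two-layer network exactly as it was trained
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(256))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))

# load the network weights from the best "bigger" checkpoint
weights_file = "weights-improvement-XX-X.XXXX-bigger.weights.h5"  # placeholder: use your own checkpoint
model.load_weights(weights_file)
model.compile(loss="categorical_crossentropy", optimizer="adam")

# pick a random seed pattern and print it
start = np.random.randint(0, len(dataX) - 1)
pattern = list(dataX[start])
print("Seed:")
print("\"" + "".join([int_to_char[value] for value in pattern]) + "\"")

# generate 1,000 characters, sliding the seed window along as we go
for i in range(1000):
    x = np.reshape(pattern, (1, len(pattern), 1))
    x = x / float(n_vocab)
    prediction = model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    sys.stdout.write(int_to_char[index])
    pattern.append(index)
    pattern = pattern[1:]
print("\nDone.")
```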
One example of running this text generation script produces the output below.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
The randomly chosen seed text was:
The generated text with the seed (cleaned up for presentation) was:
You can see that there are generally fewer spelling mistakes, and the text looks more realistic but is still quite nonsensical.
For example, the same phrases get repeated again and again, like “said to herself” and “little.” Quotes are opened but not closed.
These are better results, but there is still a lot of room for improvement.
10 Extension Ideas to Improve the Model
Below are ten ideas that you could experiment with to further improve the model:
- Predict fewer than 1,000 characters as output for a given seed
- Remove all punctuation from the source text and, therefore, from the model’s vocabulary
- Try a one-hot encoding for the input sequences
- Train the model on padded sentences rather than random sequences of characters
- Increase the number of training epochs to 100 or many hundreds
- Add dropout to the visible input layer and consider tuning the dropout percentage
- Tune the batch size; try a batch size of 1 as a (very slow) baseline and larger sizes from there
- Add more memory units to the layers and/or more layers
- Experiment with scale factors (temperature) when interpreting the prediction probabilities
- Change the LSTM layers to be “stateful” to maintain state across batches
Did you try any of these extensions? Share your results in the comments.
Resources
This character text model is a popular way of generating text using recurrent neural networks.
Below are some more resources and tutorials on the topic if you are interested in going deeper. Perhaps the most popular is the tutorial by Andrej Karpathy titled “The Unreasonable Effectiveness of Recurrent Neural Networks.”
- Generating Text with Recurrent Neural Networks [pdf], 2011
- Keras code example of LSTM for text generation
- Lasagne code example of LSTM for text generation
- MXNet tutorial for using an LSTM for text generation
- Auto-Generating Clickbait With Recurrent Neural Networks
Summary
In this post, you discovered how you can develop an LSTM recurrent neural network for text generation in Python with the Keras deep learning library.
After reading this post, you know:
- Where to download the ASCII text for classical books for free that you can use for training
- How to train an LSTM network on text sequences and how to use the trained network to generate new sequences
- How to develop stacked LSTM networks and lift the performance of the model
Do you have any questions about text generation with LSTM networks or this post? Ask your questions in the comments below, and I will do my best to answer them.