Another type of neural network is dominating difficult machine learning problems involving sequences of inputs: recurrent neural networks.
Recurrent neural networks have connections that form loops, adding feedback and memory to the network over time. This memory allows this type of network to learn and generalize across sequences of inputs rather than individual patterns.
A powerful type of recurrent neural network called the Long Short-Term Memory network has been shown to be particularly effective when stacked into a deep configuration, achieving state-of-the-art results on a diverse array of problems from language translation to automatic captioning of images and videos.
In this post, you will get a crash course in recurrent neural networks for deep learning, acquiring just enough understanding to start using LSTM networks in Python with Keras.
After reading this post, you will know:
- The limitations of Multilayer Perceptrons that are addressed by recurrent neural networks
- The problems that must be addressed to make recurrent neural networks useful
- The details of the Long Short-Term Memory networks used in applied deep learning
Support for Sequences in Neural Networks
Some problem types are best framed as involving either a sequence as an input or a sequence as an output.
For example, consider a univariate time series problem, like the price of a stock over time. This dataset can be framed as a prediction problem for a classical feed-forward multilayer perceptron network by defining a window size (e.g., 5) and training the network to make short-term predictions from the fixed-size window of inputs, as in the sketch below.
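As a rough sketch of that framing (the sine wave stands in for a real price series, and the window size of 5 is just an illustration):

```python
# Sketch: frame a univariate series as a supervised learning problem
# using a fixed window of lagged observations.
import numpy as np

def make_windows(series, window=5):
    """Turn a 1-D series into (samples, window) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])  # the last `window` observations
        y.append(series[i + window])    # the next value to predict
    return np.array(X), np.array(y)

prices = np.sin(np.linspace(0, 10, 200))  # stand-in for a real price series
X, y = make_windows(prices, window=5)
print(X.shape, y.shape)  # (195, 5) (195,)
```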
This would work but is very limited. The window of inputs adds memory to the problem but is limited to just a fixed number of points and must be chosen with sufficient knowledge of the problem. A naive window would not capture the broader trends over minutes, hours, and days that might be relevant to making a prediction. From one prediction to the next, the network only knows about the specific inputs it is provided.
Univariate time series prediction is important, but there are even more interesting problems that involve sequences.
Consider the following taxonomy of sequence problems that require mapping an input to an output (taken from Andrej Karpathy).
- One-to-Many: sequence output for image captioning
- Many-to-One: sequence input for sentiment classification
- Many-to-Many: sequence in and out for machine translation
- Synced Many-to-Many: synced sequences in and out for video classification
You can also see that the one-to-one case, where a single input maps to a single output, corresponds to a classical feed-forward neural network for a prediction task like image classification.
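To make two of these framings concrete, here is a small hedged Keras sketch; the layer sizes, sequence length, and feature count are arbitrary placeholders, not values from this post:

```python
# Sketch: many-to-one vs. synced many-to-many sequence models in Keras.
# All sizes below are arbitrary placeholders.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

timesteps, features = 10, 8

# Many-to-one: read a whole sequence, emit a single prediction (e.g., sentiment).
many_to_one = Sequential([
    LSTM(32, input_shape=(timesteps, features)),  # returns only the final output
    Dense(1, activation='sigmoid'),
])

# Synced many-to-many: emit one prediction per input time step (e.g., per-frame labels).
many_to_many = Sequential([
    LSTM(32, return_sequences=True, input_shape=(timesteps, features)),
    TimeDistributed(Dense(1, activation='sigmoid')),
])

dummy = np.zeros((2, timesteps, features))
print(many_to_one.predict(dummy, verbose=0).shape)   # (2, 1): one output per sequence
print(many_to_many.predict(dummy, verbose=0).shape)  # (2, 10, 1): one output per time step
```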
Sequence problems are an important class of problem for neural networks, and one where deep learning has recently shown impressive results. State-of-the-art results have been achieved using a type of network specifically designed for sequence problems, called recurrent neural networks.
Recurrent Neural Networks
Recurrent neural networks or RNNs are a special type of neural network designed for sequence problems.
Given a standard feed-forward multilayer perceptron network, a recurrent neural network can be thought of as the addition of loops to the architecture. For example, in a given layer, each neuron may pass its signal laterally (sideways) in addition to forward to the next layer. The output of the network may feed back as an input to the network with the next input vector, and so on.
The recurrent connections add state or memory to the network and allow it to learn broader abstractions from the input sequences.
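In the simplest, "vanilla" form of a recurrent network (not spelled out in this post, but standard in the literature), this state is a hidden vector that is updated at every time step from the current input and the previous state:

$$ h_t = \phi\left( W_x x_t + W_h h_{t-1} + b \right) $$

Here $x_t$ is the input at time step $t$, $h_{t-1}$ is the state carried over from the previous step, $W_x$ and $W_h$ are learned weight matrices, and $\phi$ is a nonlinearity such as tanh. It is the $W_h h_{t-1}$ term, the loop back onto the network's own state, that gives the network its memory.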
The field of recurrent neural networks is well established with popular methods. For the techniques to be effective on real problems, two major issues needed to be resolved:
- How to train the network with backpropagation
- How to stop gradients vanishing or exploding during training
1. How to Train Recurrent Neural Networks
The staple technique for training feed-forward neural networks is to backpropagate error and update the network weights.
Backpropagation breaks down in a recurrent neural network because of the recurrent or loop connections.
This was addressed with a modification of the backpropagation technique called Backpropagation Through Time or BPTT.
Instead of performing backpropagation on the recurrent network directly, the structure of the network is unrolled: copies of the neurons that have recurrent connections are created. For example, a single neuron with a connection to itself (A->A) could be represented as two neurons with the same weight values (A->B).
This allows the cyclic graph of a recurrent neural network to be turned into an acyclic graph like a classic feed-forward neural network, and backpropagation can be applied.
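As a loose illustration (plain NumPy with random, untrained weights), the loop below computes the same recurrent step once per time step; unrolling simply means treating each iteration as its own copy of the cell, so the whole computation becomes an ordinary acyclic graph that backpropagation can traverse:

```python
# Sketch: the forward pass of a simple recurrent layer, unrolled over time.
# Weights are random placeholders; each loop iteration is one "copy" of the cell.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden, timesteps = 3, 4, 6

W_x = rng.normal(size=(n_hidden, n_input))   # input-to-hidden weights
W_h = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(n_hidden)

x = rng.normal(size=(timesteps, n_input))    # one input vector per time step
h = np.zeros(n_hidden)                       # initial hidden state

states = []
for t in range(timesteps):
    h = np.tanh(W_x @ x[t] + W_h @ h + b)    # same weights reused at every step
    states.append(h)

print(len(states), states[-1].shape)  # 6 (4,)
```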
2. How to Have Stable Gradients During Training
When backpropagation is used in very deep neural networks and unrolled recurrent neural networks, the gradients that are calculated to update the weights can become unstable.
They can become very large numbers (the exploding gradient problem) or very small numbers (the vanishing gradient problem). These unstable values, in turn, are used to update the weights in the network, making training unstable and the network unreliable.
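A crude numerical sketch of why this happens: backpropagating through many unrolled time steps multiplies the gradient by roughly the same recurrent factor over and over, so a factor slightly below 1 shrinks toward zero while a factor slightly above 1 blows up (the factors 0.9 and 1.1 below are purely illustrative):

```python
# Sketch: the effect of repeated multiplication across 100 unrolled time steps.
steps = 100
print(0.9 ** steps)  # ~2.7e-05: the gradient effectively vanishes
print(1.1 ** steps)  # ~13780.6: the gradient explodes
```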
This problem is alleviated in deep multilayer perceptron networks through the use of the rectifier transfer function, and by more exotic but now less popular approaches such as unsupervised pre-training of layers.
In recurrent neural network architectures, this problem has been alleviated using a new type of architecture called the Long Short-Term Memory network, which allows deep recurrent networks to be trained.
Long Short-Term Memory Networks
The Long Short-Term Memory, or LSTM, network is a recurrent neural network that is trained using Backpropagation Through Time and overcomes the vanishing gradient problem.
As such, it can be used to create large (stacked) recurrent networks that, in turn, can be used to address difficult sequence problems in machine learning and achieve state-of-the-art results.
Instead of neurons, LSTM networks have memory blocks connected into layers.
A block has components that make it smarter than a classical neuron, including a memory for recent sequences. A block contains gates that manage the block's state and output. Each block (or unit) operates upon an input sequence, and each gate within a unit uses the sigmoid activation function to control whether it is triggered or not, making the change of state and the addition of information flowing through the unit conditional.
There are three types of gates within a memory unit:
- Forget Gate: conditionally decides what information to discard from the unit.
- Input Gate: conditionally decides which values from the input to update the memory state.
- Output Gate: conditionally decides what to output based on input and the memory of the unit.
Each unit is like a mini state machine, where the gates of the unit have weights that are learned during the training procedure.
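For reference, one widely used formulation of these gates (there are several variants in the literature; this follows the common "LSTM with forget gate" form rather than anything spelled out in this post) is:

$$ f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f) $$
$$ i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i) $$
$$ o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) $$
$$ c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c) $$
$$ h_t = o_t \odot \tanh(c_t) $$

Here $\sigma$ is the sigmoid function, $\odot$ is element-wise multiplication, $c_t$ is the unit's internal memory (cell state), and $h_t$ is its output at time step $t$. Because the forget gate $f_t$ can carry $c_{t-1}$ forward almost unchanged from one step to the next, gradients can flow back over many time steps without vanishing.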
You can see how sophisticated learning and memory can be achieved with a single layer of LSTMs, and it is not hard to imagine how higher-order abstractions can be built up by stacking multiple such layers.
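As a small hedged sketch of what a stacked LSTM looks like in Keras (the layer sizes, window length, and regression output below are placeholders, not a recipe from this post):

```python
# Sketch: a two-layer (stacked) LSTM for next-step prediction on a windowed series.
# All sizes are illustrative placeholders.
from keras.models import Sequential
from keras.layers import LSTM, Dense

timesteps, features = 5, 1  # e.g., a window of 5 past values of one series

model = Sequential([
    # return_sequences=True so the second LSTM receives a state for every time step
    LSTM(50, return_sequences=True, input_shape=(timesteps, features)),
    LSTM(50),
    Dense(1),  # single next-step prediction
])
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
```

The only wrinkle when stacking is setting return_sequences=True on every LSTM layer except the last, so that each layer passes a full sequence of outputs up to the layer above it.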
Resources
You have covered a lot of ground in this post. Below are some resources that you can use to go deeper into the topic of recurrent neural networks for deep learning.
Resources to learn more about recurrent neural networks and LSTMs.
- Recurrent neural network on Wikipedia
- Long Short-Term Memory on Wikipedia
- The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy
- Understanding LSTM Networks
- Deep Dive into Recurrent Neural Nets
- A Beginner’s Guide to Recurrent Networks and LSTMs
Popular tutorials for implementing LSTMs.
- LSTMs for language modeling with TensorFlow
- RNN for Spoken Word Understanding in Theano
- LSTM for sentiment analysis in Theano
Primary sources on LSTMs.
- Long short-term memory [pdf], 1997 paper by Hochreiter and Schmidhuber
- Learning to forget: Continual prediction with LSTM, 2000 paper by Gers, Schmidhuber, and Cummins that added the forget gate
- On the difficulty of training Recurrent Neural Networks [pdf], 2013
Summary
In this post, you discovered sequence problems and recurrent neural networks that can be used to address them.
Specifically, you learned:
- The limitations of classical feed-forward neural networks and how recurrent neural networks can overcome these problems
- The practical problems in training recurrent neural networks and how they are overcome
- The Long Short-Term Memory network used to create deep recurrent neural networks
Do you have any questions about deep recurrent neural networks, LSTMs, or this post? Ask your question in the comments, and I will do my best to answer.