LSTM
LSTM (long short-term memory) is an architecture that was introduced back in 1997 by Jürgen Schmidhuber and Sepp Hochreiter. In this architecture, there are not one but two hidden states. In our base RNN, the hidden state is the output of the RNN at the previous time step. That hidden state is then responsible for two things:
- Having the right information for the output layer to predict the correct next token
- Retaining memory of everything that happened in the sentence
Consider, for example, the sentences “Henry has a dog and he likes his dog very much” and “Sophie has a dog and she likes her dog very much.” It’s very clear that the RNN needs to remember the name at the beginning of the sentence to be able to predict he/she or his/her.
In practice, RNNs are really bad at retaining memory of what happened much earlier in the sentence, which is the motivation for adding another hidden state (called the cell state) in the LSTM. The cell state will be responsible for keeping long short-term memory, while the hidden state will focus on the next token to predict. Let's take a closer look at how this is achieved and build an LSTM from scratch.
Building an LSTM from Scratch
In order to build an LSTM, we first have to understand its architecture. The figure below shows its inner structure.
In this picture, our input $x_{t}$ enters on the left with the previous hidden state ($h_{t-1}$) and cell state ($c_{t-1}$). The four orange boxes represent four layers (our neural nets) with the activation being either sigmoid ($\sigma$) or tanh. tanh is just a sigmoid function rescaled to the range -1 to 1. Its mathematical expression can be written like this:
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x}+e^{-x}} = 2 \sigma(2x) - 1
where $\sigma$ is the sigmoid function. The green circles are elementwise operations. What goes out on the right is the new hidden state ($h_{t}$) and new cell state ($c_{t}$), ready for our next input. The new hidden state is also used as output, which is why the arrow splits to go up.
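As a quick numerical check (a minimal sketch, not part of the original notebook), we can confirm this identity with PyTorch:

import torch

x = torch.linspace(-3, 3, 7)
# tanh(x) and 2*sigmoid(2x) - 1 should match up to floating-point error
print(torch.allclose(torch.tanh(x), 2*torch.sigmoid(2*x) - 1))  # True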
Let’s go over the four neural nets (called gates) one by one and explain the diagram—but before this, notice how very little the cell state (at the top) is changed. It doesn’t even go directly through a neural net! This is exactly why it will carry on a longer-term state.
First, the arrows for the input and the old hidden state are joined together. In the RNN we wrote earlier in this chapter, we were adding them together. In the LSTM, we stack them in one big tensor. This means the dimension of our embeddings (which is the dimension of $x_{t}$) can be different from the dimension of our hidden state. If we call those `n_in` and `n_hid`, the arrow at the bottom is of size `n_in + n_hid`; thus all the neural nets (orange boxes) are linear layers with `n_in + n_hid` inputs and `n_hid` outputs.
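As a small shape sketch (the sizes here are arbitrary, chosen only for illustration), stacking the two tensors and feeding them to one of the gate layers looks like this:

import torch
import torch.nn as nn

bs, n_in, n_hid = 4, 10, 20               # arbitrary sizes for illustration
x_t    = torch.randn(bs, n_in)            # current input (the embedding)
h_prev = torch.randn(bs, n_hid)           # previous hidden state

stacked = torch.cat([h_prev, x_t], dim=1) # shape (bs, n_in + n_hid)
gate = nn.Linear(n_in + n_hid, n_hid)     # one of the four orange boxes
print(gate(stacked).shape)                # torch.Size([4, 20])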
The first gate (looking from left to right) is called the forget gate. Since it's a linear layer followed by a sigmoid, its output will consist of scalars between 0 and 1. We multiply this result by the cell state to determine which information to keep and which to throw away: values closer to 0 are discarded and values closer to 1 are kept. This gives the LSTM the ability to forget things about its long-term state. For instance, when crossing a period or an `xxbos` token, we would expect it to (have learned to) reset its cell state.
The second gate is called the input gate. It works with the third gate (which doesn’t really have a name but is sometimes called the cell gate) to update the cell state. For instance, we may see a new gender pronoun, in which case we’ll need to replace the information about gender that the forget gate removed. Similar to the forget gate, the input gate decides which elements of the cell state to update (values close to 1) or not (values close to 0). The third gate determines what those updated values are, in the range of –1 to 1 (thanks to the tanh function). The result is then added to the cell state.
The last gate is the output gate. It determines which information from the cell state to use to generate the output. The cell state goes through a tanh before being combined with the sigmoid output from the output gate, and the result is the new hidden state.
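Putting the pieces together (this summary notation is not in the original text, but is a standard way of writing the LSTM update), with $[h_{t-1}, x_t]$ denoting the stacked input and $\odot$ elementwise multiplication, the four gates and the two state updates are:

f_t = \sigma(W_f\,[h_{t-1}, x_t] + b_f)
i_t = \sigma(W_i\,[h_{t-1}, x_t] + b_i)
\tilde{c}_t = \tanh(W_c\,[h_{t-1}, x_t] + b_c)
o_t = \sigma(W_o\,[h_{t-1}, x_t] + b_o)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)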
In terms of code, we can write the same steps like this:
In [ ]:
class LSTMCell(Module):
    def __init__(self, ni, nh):
        self.forget_gate = nn.Linear(ni + nh, nh)
        self.input_gate  = nn.Linear(ni + nh, nh)
        self.cell_gate   = nn.Linear(ni + nh, nh)
        self.output_gate = nn.Linear(ni + nh, nh)

    def forward(self, input, state):
        h,c = state
        h = torch.cat([h, input], dim=1)
        forget = torch.sigmoid(self.forget_gate(h))
        c = c * forget
        inp = torch.sigmoid(self.input_gate(h))
        cell = torch.tanh(self.cell_gate(h))
        c = c + inp * cell
        out = torch.sigmoid(self.output_gate(h))
        h = out * torch.tanh(c)
        return h, (h,c)
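As a quick usage sketch (the batch size and dimensions below are made up, and this assumes the PyTorch/fastai imports from earlier in the chapter), we can run one step of this cell on random tensors:

bs, ni, nh = 4, 10, 20                               # made-up sizes
cell = LSTMCell(ni, nh)                              # the cell defined above
x = torch.randn(bs, ni)
state = (torch.zeros(bs, nh), torch.zeros(bs, nh))   # (hidden state, cell state)

out, (h, c) = cell(x, state)
print(out.shape, h.shape, c.shape)                   # all torch.Size([4, 20])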
In practice, we can then refactor the code. Also, in terms of performance, it’s better to do one big matrix multiplication than four smaller ones (that’s because we only launch the special fast kernel on the GPU once, and it gives the GPU more work to do in parallel). The stacking takes a bit of time (since we have to move one of the tensors around on the GPU to have it all in a contiguous array), so we use two separate layers for the input and the hidden state. The optimized and refactored code then looks like this:
In [ ]:
class LSTMCell(Module):
    def __init__(self, ni, nh):
        self.ih = nn.Linear(ni,4*nh)
        self.hh = nn.Linear(nh,4*nh)

    def forward(self, input, state):
        h,c = state
        # One big multiplication for all the gates is better than 4 smaller ones
        gates = (self.ih(input) + self.hh(h)).chunk(4, 1)
        ingate,forgetgate,outgate = map(torch.sigmoid, gates[:3])
        cellgate = gates[3].tanh()
        c = (forgetgate*c) + (ingate*cellgate)
        h = outgate * c.tanh()
        return h, (h,c)
Here we use the PyTorch `chunk` method to split our tensor into four pieces. It works like this:
In [ ]:
t = torch.arange(0,10); t
Out[ ]:
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [ ]:
t.chunk(2)
Out[ ]:
(tensor([0, 1, 2, 3, 4]), tensor([5, 6, 7, 8, 9]))
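In the refactored cell we chunk along dimension 1 (the feature dimension) instead, so each of the four pieces keeps the batch dimension and gets `nh` columns. A small shape check (sizes made up for illustration) looks like this:

t = torch.randn(2, 8)                    # e.g., a batch of 2 with 4*nh columns, nh=2
print([p.shape for p in t.chunk(4, dim=1)])
# [torch.Size([2, 2]), torch.Size([2, 2]), torch.Size([2, 2]), torch.Size([2, 2])]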
Let’s now use this architecture to train a language model!
Training a Language Model Using LSTMs
Here is the same network as `LMModel5`, using a two-layer LSTM. We can train it at a higher learning rate, for a shorter time, and get better accuracy:
In [ ]:
class LMModel6(Module):
    def __init__(self, vocab_sz, n_hidden, n_layers):
        self.i_h = nn.Embedding(vocab_sz, n_hidden)
        self.rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
        self.h_o = nn.Linear(n_hidden, vocab_sz)
        self.h = [torch.zeros(n_layers, bs, n_hidden) for _ in range(2)]

    def forward(self, x):
        res,h = self.rnn(self.i_h(x), self.h)
        self.h = [h_.detach() for h_ in h]
        return self.h_o(res)

    def reset(self):
        for h in self.h: h.zero_()
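Note that `self.h` now holds two zero tensors rather than one, because an LSTM carries both a hidden state and a cell state, each of shape `(n_layers, bs, n_hidden)` (`bs` is the batch-size variable defined earlier in the chapter). As a minimal sketch with made-up sizes, the shapes `nn.LSTM` works with look like this:

n_layers, bs, n_hidden, seq_len = 2, 4, 64, 16    # made-up sizes
rnn = nn.LSTM(n_hidden, n_hidden, n_layers, batch_first=True)
x = torch.randn(bs, seq_len, n_hidden)
h0 = (torch.zeros(n_layers, bs, n_hidden), torch.zeros(n_layers, bs, n_hidden))

res, (h_n, c_n) = rnn(x, h0)
print(res.shape, h_n.shape, c_n.shape)
# torch.Size([4, 16, 64]) torch.Size([2, 4, 64]) torch.Size([2, 4, 64])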
In [ ]:
learn = Learner(dls, LMModel6(len(vocab), 64, 2),
                loss_func=CrossEntropyLossFlat(),
                metrics=accuracy, cbs=ModelResetter)
learn.fit_one_cycle(15, 1e-2)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
0 | 3.000821 | 2.663942 | 0.438314 | 00:02 |
1 | 2.139642 | 2.184780 | 0.240479 | 00:02 |
2 | 1.607275 | 1.812682 | 0.439779 | 00:02 |
3 | 1.347711 | 1.830982 | 0.497477 | 00:02 |
4 | 1.123113 | 1.937766 | 0.594401 | 00:02 |
5 | 0.852042 | 2.012127 | 0.631592 | 00:02 |
6 | 0.565494 | 1.312742 | 0.725749 | 00:02 |
7 | 0.347445 | 1.297934 | 0.711263 | 00:02 |
8 | 0.208191 | 1.441269 | 0.731201 | 00:02 |
9 | 0.126335 | 1.569952 | 0.737305 | 00:02 |
10 | 0.079761 | 1.427187 | 0.754150 | 00:02 |
11 | 0.052990 | 1.494990 | 0.745117 | 00:02 |
12 | 0.039008 | 1.393731 | 0.757894 | 00:02 |
13 | 0.031502 | 1.373210 | 0.758464 | 00:02 |
14 | 0.028068 | 1.368083 | 0.758464 | 00:02 |
Now that’s better than a multilayer RNN! We can still see there is a bit of overfitting, however, which is a sign that a bit of regularization might help.