Improving the RNN
Looking at the code for our RNN, one thing that seems problematic is that we are initializing our hidden state to zero for every new input sequence. Why is that a problem? We made our sample sequences short so they would fit easily into batches. But if we order the samples correctly, those sample sequences will be read in order by the model, exposing the model to long stretches of the original sequence.
Another thing we can look at is having more signal: why only predict the fourth word when we could use the intermediate predictions to also predict the second and third words?
Let’s see how we can implement those changes, starting with adding some state.
Maintaining the State of an RNN
Because we initialize the model’s hidden state to zero for each new sample, we are throwing away all the information we have about the sentences we have seen so far, which means that our model doesn’t actually know where we are up to in the overall counting sequence. This is easily fixed; we can simply move the initialization of the hidden state to __init__.
But this fix will create its own subtle, but important, problem. It effectively makes our neural network as deep as the entire number of tokens in our document. For instance, if there were 10,000 tokens in our dataset, we would be creating a 10,000-layer neural network.
To see why this is the case, consider the original pictorial representation of our recurrent neural network in <>, before refactoring it with a for loop. You can see that each layer corresponds to one token of input. The representation of a recurrent neural network before refactoring with the for loop is called the unrolled representation, and it is often helpful to keep it in mind when trying to understand an RNN.
The problem with a 10,000-layer neural network is that if and when you get to the 10,000th word of the dataset, you will still need to calculate the derivatives all the way back to the first layer. This is going to be very slow indeed, and very memory-intensive. It is unlikely that you’ll be able to store even one mini-batch on your GPU.
The solution to this problem is to tell PyTorch that we do not want to backpropagate the derivatives through the entire implicit neural network. Instead, we will just keep the last three layers of gradients. To remove all of the gradient history in PyTorch, we use the detach method.
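To see what detach does, here is a tiny standalone sketch (not from the chapter's notebook; the numbers are made up for illustration). A stateful value is built up over several steps, and detaching it partway through means gradients only flow back through the steps after the cut:

import torch

w = torch.tensor([3.0], requires_grad=True)
h = torch.zeros(1)
for t in range(5):
    h = torch.relu(h + w)   # build up the state step by step
    if t == 2:
        h = h.detach()      # same values, but cut off from the earlier steps
loss = h.sum()
loss.backward()             # backprop stops at the detach point
print(w.grad)               # tensor([2.]): only the 2 steps after the detach contribute
                            # (it would be tensor([5.]) without the detach)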
Here is the new version of our RNN. It is now stateful, because it remembers its activations between different calls to forward, which represent its use for different samples in the batch:
In [ ]:
class LMModel3(Module):
    def __init__(self, vocab_sz, n_hidden):
        self.i_h = nn.Embedding(vocab_sz, n_hidden)   # input  -> hidden
        self.h_h = nn.Linear(n_hidden, n_hidden)      # hidden -> hidden
        self.h_o = nn.Linear(n_hidden, vocab_sz)      # hidden -> output
        self.h = 0                                    # hidden state, kept between calls

    def forward(self, x):
        for i in range(3):
            self.h = self.h + self.i_h(x[:,i])
            self.h = F.relu(self.h_h(self.h))
        out = self.h_o(self.h)
        self.h = self.h.detach()   # keep the value, drop the gradient history
        return out

    def reset(self): self.h = 0
This model will have the same activations whatever sequence length we pick, because the hidden state will remember the last activation from the previous batch. The only thing that will be different is the gradients computed at each step: they will only be calculated over the tokens of the current sequence length, instead of over the whole stream. This approach is called backpropagation through time (BPTT).
jargon: Backpropagation through time (BPTT): Treating a neural net with effectively one layer per time step (usually refactored using a loop) as one big model, and calculating gradients on it in the usual way. To avoid running out of memory and time, we usually use truncated BPTT, which “detaches” the history of computation steps in the hidden state every few time steps.
To use LMModel3, we need to make sure the samples are going to be seen in a certain order. As we saw in <>, if the first line of the first batch is our dset[0], then the second batch should have dset[1] as the first line, so that the model sees the text flowing.
LMDataLoader was doing this for us in <>. This time we’re going to do it ourselves.
To do this, we are going to rearrange our dataset. First we compute m = len(dset) // bs and divide the samples into bs groups of m samples each (this is the equivalent of splitting the whole concatenated dataset into, for example, 64 equally sized pieces, since we’re using bs=64 here); m is the length of each of these pieces. For instance, if we’re using our whole dataset (although we’ll actually split it into train versus valid in a moment), that will be:
In [ ]:
m = len(seqs)//bs
m,bs,len(seqs)
Out[ ]:
(328, 64, 21031)
The first batch will be composed of the samples:
(0, m, 2*m, ..., (bs-1)*m)
the second batch of the samples:
(1, m+1, 2*m+1, ..., (bs-1)*m+1)
and so forth. This way, at each epoch, the model will see a chunk of contiguous text of size 3*m (since each text is of size 3) on each line of the batch.
The following function does that reindexing:
In [ ]:
def group_chunks(ds, bs):
    m = len(ds) // bs
    new_ds = L()
    for i in range(m): new_ds += L(ds[i + m*j] for j in range(bs))
    return new_ds
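To see what this does on something small, here is a quick toy example (not from the original notebook): with eight samples and bs=4 we get m=2, and each row of the resulting batches reads its own contiguous slice of the data:

toy = L(range(8))
group_chunks(toy, 4)
# -> [0, 2, 4, 6, 1, 3, 5, 7]
# The first batch of size 4 is [0, 2, 4, 6] and the second is [1, 3, 5, 7]:
# row 0 of the batches sees samples 0 then 1, row 1 sees 2 then 3, and so on,
# so each row reads its own contiguous stretch of the original sequence.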
Then we just pass drop_last=True when building our DataLoaders to drop the last batch, which does not have a shape of bs. We also pass shuffle=False to make sure the texts are read in order:
In [ ]:
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(
    group_chunks(seqs[:cut], bs),
    group_chunks(seqs[cut:], bs),
    bs=bs, drop_last=True, shuffle=False)
The last thing we add is a little tweak of the training loop via a Callback. We will talk more about callbacks in <>; this one will call the reset method of our model at the beginning of each epoch and before each validation phase. Since we implemented that method to zero the hidden state of the model, this will make sure we start with a clean state before reading those contiguous chunks of text.
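fastai provides a callback that does exactly this, ModelResetter, which we pass to the Learner below. Purely as an illustration of what such a callback might look like, here is a minimal sketch; it assumes fastai's Callback event names before_train and before_validate, and it is not the library's actual implementation:

class ResetStateCallback(Callback):
    # Illustrative sketch only: zero the model's hidden state before
    # training and before validation, so each pass over the contiguous
    # text stream starts from a clean state.
    def before_train(self):    self.model.reset()
    def before_validate(self): self.model.reset()

In practice we simply pass the built-in ModelResetter. Let's also train for a bit longer: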
In [ ]:
learn = Learner(dls, LMModel3(len(vocab), 64), loss_func=F.cross_entropy,
                metrics=accuracy, cbs=ModelResetter)
learn.fit_one_cycle(10, 3e-3)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
0 | 1.677074 | 1.827367 | 0.467548 | 00:02 |
1 | 1.282722 | 1.870913 | 0.388942 | 00:02 |
2 | 1.090705 | 1.651793 | 0.462500 | 00:02 |
3 | 1.005092 | 1.613794 | 0.516587 | 00:02 |
4 | 0.965975 | 1.560775 | 0.551202 | 00:02 |
5 | 0.916182 | 1.595857 | 0.560577 | 00:02 |
6 | 0.897657 | 1.539733 | 0.574279 | 00:02 |
7 | 0.836274 | 1.585141 | 0.583173 | 00:02 |
8 | 0.805877 | 1.629808 | 0.586779 | 00:02 |
9 | 0.795096 | 1.651267 | 0.588942 | 00:02 |
This is already better! The next step is to use more targets and compare them to the intermediate predictions.
Creating More Signal
Another problem with our current approach is that we only predict one output word for each three input words. That means that the amount of signal that we are feeding back to update weights with is not as large as it could be. It would be better if we predicted the next word after every single word, rather than every three words, as shown in <>.
This is easy enough to add. We first need to change our data so that the dependent variable has each of the three next words after each of our three input words. Instead of 3, we use an attribute, sl (for sequence length), and make it a bit bigger:
In [ ]:
sl = 16
seqs = L((tensor(nums[i:i+sl]), tensor(nums[i+1:i+sl+1]))
         for i in range(0, len(nums)-sl-1, sl))
cut = int(len(seqs) * 0.8)
dls = DataLoaders.from_dsets(group_chunks(seqs[:cut], bs),
                             group_chunks(seqs[cut:], bs),
                             bs=bs, drop_last=True, shuffle=False)
Looking at the first element of seqs, we can see that it contains two lists of the same size. The second list is the same as the first, but offset by one element:
In [ ]:
[L(vocab[o] for o in s) for s in seqs[0]]
Out[ ]:
[(#16) ['one','.','two','.','three','.','four','.','five','.'...],
(#16) ['.','two','.','three','.','four','.','five','.','six'...]]
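If you want to check that offset programmatically rather than just by eye, a quick sanity check (not in the original notebook) is to compare the two tensors directly; this should evaluate to a tensor containing True:

x, y = seqs[0]
# y is x shifted left by one position, so these slices should match exactly
(x[1:] == y[:-1]).all()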
Now we need to modify our model so that it outputs a prediction after every word, rather than just at the end of a three-word sequence:
In [ ]:
class LMModel4(Module):
    def __init__(self, vocab_sz, n_hidden):
        self.i_h = nn.Embedding(vocab_sz, n_hidden)
        self.h_h = nn.Linear(n_hidden, n_hidden)
        self.h_o = nn.Linear(n_hidden, vocab_sz)
        self.h = 0

    def forward(self, x):
        outs = []
        for i in range(sl):
            self.h = self.h + self.i_h(x[:,i])
            self.h = F.relu(self.h_h(self.h))
            outs.append(self.h_o(self.h))   # predict after every token, not just the last
        self.h = self.h.detach()            # truncate the gradient history between batches
        return torch.stack(outs, dim=1)

    def reset(self): self.h = 0
This model will return outputs of shape bs x sl x vocab_sz (since we stacked on dim=1). Our targets are of shape bs x sl, so we need to flatten both the predictions and the targets before passing them to F.cross_entropy:
In [ ]:
def loss_func(inp, targ):
    return F.cross_entropy(inp.view(-1, len(vocab)), targ.view(-1))
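As a quick shape check, here is an illustrative sketch with dummy tensors (not part of the chapter's notebook) showing how the flattening lines up:

import torch
import torch.nn.functional as F

bs_, sl_, vocab_sz_ = 2, 16, 30                   # dummy sizes for illustration
inp  = torch.randn(bs_, sl_, vocab_sz_)           # fake predictions: bs x sl x vocab_sz
targ = torch.randint(0, vocab_sz_, (bs_, sl_))    # fake targets: bs x sl

flat_inp, flat_targ = inp.view(-1, vocab_sz_), targ.view(-1)
print(flat_inp.shape, flat_targ.shape)            # torch.Size([32, 30]) torch.Size([32])
print(F.cross_entropy(flat_inp, flat_targ))       # a single scalar loss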
We can now use this loss function to train the model:
In [ ]:
learn = Learner(dls, LMModel4(len(vocab), 64), loss_func=loss_func,
                metrics=accuracy, cbs=ModelResetter)
learn.fit_one_cycle(15, 3e-3)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
0 | 3.103298 | 2.874341 | 0.212565 | 00:01 |
1 | 2.231964 | 1.971280 | 0.462158 | 00:01 |
2 | 1.711358 | 1.813547 | 0.461182 | 00:01 |
3 | 1.448516 | 1.828176 | 0.483236 | 00:01 |
4 | 1.288630 | 1.659564 | 0.520671 | 00:01 |
5 | 1.161470 | 1.714023 | 0.554932 | 00:01 |
6 | 1.055568 | 1.660916 | 0.575033 | 00:01 |
7 | 0.960765 | 1.719624 | 0.591064 | 00:01 |
8 | 0.870153 | 1.839560 | 0.614665 | 00:01 |
9 | 0.808545 | 1.770278 | 0.624349 | 00:01 |
10 | 0.758084 | 1.842931 | 0.610758 | 00:01 |
11 | 0.719320 | 1.799527 | 0.646566 | 00:01 |
12 | 0.683439 | 1.917928 | 0.649821 | 00:01 |
13 | 0.660283 | 1.874712 | 0.628581 | 00:01 |
14 | 0.646154 | 1.877519 | 0.640055 | 00:01 |
We need to train for longer, since the task has changed a bit and is more complicated now. But we end up with a good result… At least, sometimes. If you run it a few times, you’ll see that you can get quite different results on different runs. That’s because effectively we have a very deep network here, which can result in very large or very small gradients. We’ll see in the next part of this chapter how to deal with this.
Now, the obvious way to get a better model is to go deeper: we only have one linear layer between the hidden state and the output activations in our basic RNN, so maybe we’ll get better results with more.