Questionnaire
- If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?
- Why do we concatenate the documents in our dataset before creating a language model?
- To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to our model?
- How can we share a weight matrix across multiple layers in PyTorch? (see the weight-tying sketch after this list)
- Write a module that predicts the third word given the previous two words of a sentence, without peeking.
- What is a recurrent neural network?
- What is “hidden state”?
- What is the equivalent of hidden state in `LMModel1`?
- To maintain the state in an RNN, why is it important to pass the text to the model in order?
- What is an “unrolled” representation of an RNN?
- Why can maintaining the hidden state in an RNN lead to memory and performance problems? How do we fix this problem?
- What is “BPTT”?
- Write code to print out the first few batches of the validation set, including converting the token IDs back into English strings, as we showed for batches of IMDb data in <>.
- What does the `ModelResetter` callback do? Why do we need it?
- What are the downsides of predicting just one output word for each three input words?
- Why do we need a custom loss function for `LMModel4`?
- Why is the training of `LMModel4` unstable?
- In the unrolled representation, we can see that a recurrent neural network actually has many layers. So why do we need to stack RNNs to get better results?
- Draw a representation of a stacked (multilayer) RNN.
- Why should we get better results in an RNN if we call `detach` less often? Why might this not happen in practice with a simple RNN?
- Why can a deep network result in very large or very small activations? Why does this matter?
- In a computer’s floating-point representation of numbers, which numbers are the most precise?
- Why do vanishing gradients prevent training?
- Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?
- What are these two states called in an LSTM?
- What is tanh, and how is it related to sigmoid? (a short check follows this list)
- What is the purpose of this code in `LSTMCell`: `h = torch.cat([h, input], dim=1)`
- What does `chunk` do in PyTorch? (a short example follows this list)
- Study the refactored version of `LSTMCell` carefully to ensure you understand how and why it does the same thing as the non-refactored version.
- Why can we use a higher learning rate for `LMModel6`?
- What are the three regularization techniques used in an AWD-LSTM model?
- What is “dropout”?
- Why do we scale the activations with dropout? Is this applied during training, inference, or both? (a minimal dropout sketch follows this list)
- What is the purpose of this line from `Dropout`: `if not self.training: return x`
- Experiment with `bernoulli_` to understand how it works.
- How do you set your model in training mode in PyTorch? In evaluation mode?
- Write the equation for activation regularization (in math or code, as you prefer). How is it different from weight decay?
- Write the equation for temporal activation regularization (in math or code, as you prefer). Why wouldn’t we use this for computer vision problems? (a sketch of both regularizers follows this list)
- What is “weight tying” in a language model? (a sketch follows this list)
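A few of the questions above lend themselves to quick experiments. For the tanh question, the relationship to sigmoid can be checked numerically: tanh is a sigmoid rescaled and shifted so its output lies in (-1, 1) instead of (0, 1).

```python
import torch

x = torch.linspace(-3, 3, 7)
# tanh(x) = 2*sigmoid(2x) - 1
print(torch.allclose(torch.tanh(x), 2*torch.sigmoid(2*x) - 1))  # True
```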
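For the `chunk` question, a quick experiment shows what it does: it splits a tensor into a given number of pieces along a dimension, which is how the refactored `LSTMCell` separates the output of its single big linear layer into the four gate inputs.

```python
import torch

t = torch.arange(16).view(2, 8)
# Split the last dimension into 4 equal pieces.
a, b, c, d = t.chunk(4, dim=1)
print(a.shape)  # torch.Size([2, 2])
print(a)        # tensor([[0, 1], [8, 9]])
```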
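The dropout questions (scaling, the `self.training` check, and `bernoulli_`) can all be explored with a minimal sketch of a dropout layer. It follows the same idea as the chapter's `Dropout` module, but the exact code here is illustrative rather than a quote.

```python
import torch
import torch.nn as nn

class DropoutSketch(nn.Module):
    "Minimal dropout layer for experimentation (illustrative, not fastai's code)."
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        # In evaluation mode dropout is a no-op; this is exactly what
        # `if not self.training: return x` accomplishes.
        if not self.training: return x
        # bernoulli_(1-p) fills the mask with 1 (keep) with probability 1-p
        # and 0 (drop) with probability p.
        mask = x.new_empty(x.shape).bernoulli_(1 - self.p)
        # Dividing by 1-p rescales the kept activations so their expected
        # value matches what the layer produces at inference time, when
        # nothing is dropped.
        return x * mask / (1 - self.p)

x = torch.ones(3, 5)
drop = DropoutSketch(0.4)
drop.train()   # training mode: activations are masked and rescaled
print(drop(x))
drop.eval()    # evaluation mode: the input passes through unchanged
print(drop(x))
```

The last four lines also answer the training/evaluation question: `model.train()` and `model.eval()` toggle the `training` flag on a module and all of its children.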
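For the two regularization questions, here is a sketch of activation regularization (AR) and temporal activation regularization (TAR) written as extra loss terms. The shapes and the hyperparameter names `alpha` and `beta` are the conventional ones for AWD-LSTM, but treat the details as illustrative.

```python
import torch

# Illustrative shapes: `out` is an RNN's output, (batch, seq_len, n_hidden).
out = torch.randn(64, 72, 512, requires_grad=True)
alpha, beta = 2.0, 1.0  # illustrative coefficients, not tuned values

# Activation regularization (AR): like weight decay, but the L2 penalty is
# applied to the activations rather than the weights.
loss_ar = alpha * out.pow(2).mean()

# Temporal activation regularization (TAR): penalizes large jumps between the
# activations at consecutive timesteps. This only makes sense when that
# dimension is an ordered sequence, which is why it isn't used for vision.
loss_tar = beta * (out[:, 1:] - out[:, :-1]).pow(2).mean()

# Both terms are simply added to the usual cross-entropy loss before backward().
```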
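Finally, the weight-sharing question near the top of the list and the weight-tying question just above come down to the same PyTorch idiom: point two attributes at the same parameter (or reuse the same module in several places in `forward`). A minimal sketch, with a made-up model name and layer sizes:

```python
import torch.nn as nn

class TiedLM(nn.Module):
    "Illustrative skeleton: the output layer reuses the embedding weights."
    def __init__(self, vocab_sz, n_hidden):
        super().__init__()
        self.i_h = nn.Embedding(vocab_sz, n_hidden)
        self.rnn = nn.LSTM(n_hidden, n_hidden, batch_first=True)
        self.h_o = nn.Linear(n_hidden, vocab_sz, bias=False)
        # Weight tying: both layers now refer to the same
        # (vocab_sz, n_hidden) parameter, so the input embedding and the
        # output projection are learned together.
        self.h_o.weight = self.i_h.weight

    def forward(self, x):
        out, _ = self.rnn(self.i_h(x))
        return self.h_o(out)
```

Sharing a whole layer across several positions works the same way: create the layer once in `__init__` and call it repeatedly in `forward`, as the chapter's first language models do with their single `i_h` and `h_h` layers.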
Further Research
- In `LMModel2`, why can `forward` start with `h=0`? Why don’t we need to say `h=torch.zeros(...)`?
- Write the code for an LSTM from scratch (you may refer to <>).
- Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare your results to the results of PyTorch’s built-in `GRU` module.
- Take a look at the source code for AWD-LSTM in fastai, and try to map each line of code to the concepts shown in this chapter.