Collaborative Filtering from Scratch
Before we can write a model in PyTorch, we first need to learn the basics of object-oriented programming and Python. If you haven’t done any object-oriented programming before, we will give you a quick introduction here, but we would recommend looking up a tutorial and getting some practice before moving on.
The key idea in object-oriented programming is the class. We have been using classes throughout this book, such as `DataLoader`, `string`, and `Learner`. Python also makes it easy for us to create new classes. Here is an example of a simple class:
In [ ]:
class Example:
    def __init__(self, a): self.a = a
    def say(self,x): return f'Hello {self.a}, {x}.'
The most important piece of this is the special method called `__init__` (pronounced dunder init). In Python, any method surrounded in double underscores like this is considered special. It indicates that there is some extra behavior associated with this method name. In the case of `__init__`, this is the method Python will call when your new object is created. So, this is where you can set up any state that needs to be initialized upon object creation. Any parameters included when the user constructs an instance of your class will be passed to the `__init__` method as parameters. Note that the first parameter to any method defined inside a class is `self`, so you can use this to set and get any attributes that you will need:
In [ ]:
ex = Example('Sylvain')
ex.say('nice to meet you')
Out[ ]:
'Hello Sylvain, nice to meet you.'
Also note that creating a new PyTorch module requires inheriting from `Module`. Inheritance is an important object-oriented concept that we will not discuss in detail here; in short, it means that we can add additional behavior to an existing class. PyTorch already provides a `Module` class, which provides some basic foundations that we want to build on. So, we add the name of this superclass after the name of the class that we are defining, as shown in the following example.
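For example (a minimal sketch reusing the Example class from earlier; the LoudExample name is made up for illustration), a subclass lists its superclass in parentheses and can then add to or override its behavior:

class LoudExample(Example):
    def say(self, x): return super().say(x).upper()

LoudExample('Sylvain').say('nice to meet you')
# 'HELLO SYLVAIN, NICE TO MEET YOU.'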
The final thing that you need to know to create a new PyTorch module is that when your module is called, PyTorch will call a method in your class called `forward`, and will pass along to it any parameters that are included in the call. Here is the class defining our dot product model:
In [ ]:
class DotProduct(Module):
    def __init__(self, n_users, n_movies, n_factors):
        self.user_factors = Embedding(n_users, n_factors)
        self.movie_factors = Embedding(n_movies, n_factors)

    def forward(self, x):
        users = self.user_factors(x[:,0])
        movies = self.movie_factors(x[:,1])
        return (users * movies).sum(dim=1)
If you haven’t seen object-oriented programming before, then don’t worry: you won’t need to use it much in this book. We are just mentioning this approach here because most online tutorials and documentation will use the object-oriented syntax.
Note that the input of the model is a tensor of shape `batch_size x 2`, where the first column (`x[:,0]`) contains the user IDs and the second column (`x[:,1]`) contains the movie IDs. As explained before, we use the embedding layers to represent our matrices of user and movie latent factors:
In [ ]:
x,y = dls.one_batch()
x.shape
Out[ ]:
torch.Size([64, 2])
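Calling the model (for example, `model(x)`) is what triggers PyTorch to run its `forward` method. As an optional sanity check (a sketch only; it assumes `n_users` and `n_movies` are already defined, as they are in the next cell too), you could pass this batch through a freshly created model and confirm that you get one prediction per row:

model = DotProduct(n_users, n_movies, 50)
preds = model(x)    # calling the module runs forward(x)
preds.shape         # expected: torch.Size([64])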
Now that we have defined our architecture and created our parameter matrices, we need to create a `Learner` to optimize our model. In the past we have used special functions, such as `cnn_learner`, which set up everything for us for a particular application. Since we are doing things from scratch here, we will use the plain `Learner` class:
In [ ]:
model = DotProduct(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
We are now ready to fit our model:
In [ ]:
learn.fit_one_cycle(5, 5e-3)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.993168 | 0.990168 | 00:12 |
1 | 0.884821 | 0.911269 | 00:12 |
2 | 0.671865 | 0.875679 | 00:12 |
3 | 0.471727 | 0.878200 | 00:11 |
4 | 0.361314 | 0.884209 | 00:12 |
The first thing we can do to make this model a little bit better is to force those predictions to be between 0 and 5. For this, we just need to use `sigmoid_range`, like in <>. One thing we discovered empirically is that it’s better to have the range go a little bit over 5, so we use `(0, 5.5)`:
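If you’re curious what `sigmoid_range` does, it is a small helper that is roughly equivalent to this sketch (not necessarily fastai’s exact source):

def sigmoid_range(x, low, high):
    # squash x into (0, 1) with a sigmoid, then rescale into (low, high)
    return torch.sigmoid(x) * (high - low) + low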
In [ ]:
class DotProduct(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
        self.user_factors = Embedding(n_users, n_factors)
        self.movie_factors = Embedding(n_movies, n_factors)
        self.y_range = y_range

    def forward(self, x):
        users = self.user_factors(x[:,0])
        movies = self.movie_factors(x[:,1])
        return sigmoid_range((users * movies).sum(dim=1), *self.y_range)
In [ ]:
model = DotProduct(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.973745 | 0.993206 | 00:12 |
1 | 0.869132 | 0.914323 | 00:12 |
2 | 0.676553 | 0.870192 | 00:12 |
3 | 0.485377 | 0.873865 | 00:12 |
4 | 0.377866 | 0.877610 | 00:11 |
This is a reasonable start, but we can do better. One obvious missing piece is that some users are just more positive or negative in their recommendations than others, and some movies are just plain better or worse than others. But in our dot product representation we do not have any way to encode either of these things. If all you can say about a movie is, for instance, that it is very sci-fi, very action-oriented, and not at all old, then you don’t really have any way to say whether most people like it.
That’s because at this point we only have weights; we do not have biases. If we have a single number for each user that we can add to our scores, and ditto for each movie, that will handle this missing piece very nicely. So first of all, let’s adjust our model architecture:
In [ ]:
class DotProductBias(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
        self.user_factors = Embedding(n_users, n_factors)
        self.user_bias = Embedding(n_users, 1)
        self.movie_factors = Embedding(n_movies, n_factors)
        self.movie_bias = Embedding(n_movies, 1)
        self.y_range = y_range

    def forward(self, x):
        users = self.user_factors(x[:,0])
        movies = self.movie_factors(x[:,1])
        # keepdim=True keeps the result as shape (batch_size, 1), matching the bias embeddings
        res = (users * movies).sum(dim=1, keepdim=True)
        res += self.user_bias(x[:,0]) + self.movie_bias(x[:,1])
        return sigmoid_range(res, *self.y_range)
Let’s try training this and see how it goes:
In [ ]:
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.929161 | 0.936303 | 00:13 |
1 | 0.820444 | 0.861306 | 00:13 |
2 | 0.621612 | 0.865306 | 00:14 |
3 | 0.404648 | 0.886448 | 00:13 |
4 | 0.292948 | 0.892580 | 00:13 |
Instead of being better, it ends up being worse (at least at the end of training). Why is that? If we look at both training runs carefully, we can see that the validation loss stopped improving in the middle and then started to get worse. As we’ve seen, this is a clear indication of overfitting. In this case, there is no way to use data augmentation, so we will have to use another regularization technique. One approach that can be helpful is weight decay.
Weight Decay
Weight decay, or L2 regularization, consists of adding the sum of all the weights squared to your loss function. Why do that? Because when we compute the gradients, this term adds a contribution to them that encourages the weights to be as small as possible.

Why would it prevent overfitting? The idea is that the larger the coefficients are, the sharper the canyons in the loss function will be. If we take the basic example of a parabola, `y = a * (x**2)`, the larger `a` is, the narrower the parabola (<>).
In [ ]:
#hide_input
#id parabolas
x = np.linspace(-2,2,100)
a_s = [1,2,5,10,50]
ys = [a * x**2 for a in a_s]
_,ax = plt.subplots(figsize=(8,6))
for a,y in zip(a_s,ys): ax.plot(x,y, label=f'a={a}')
ax.set_ylim([0,5])
ax.legend();
So, letting our model learn large parameter values might cause it to fit all the data points in the training set with an overly complex function that has very sharp changes, which will lead to overfitting.
Limiting our weights from growing too much is going to hinder the training of the model, but it will yield a state where it generalizes better. Going back to the theory briefly, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss (assuming `parameters` is a tensor of all parameters):
loss_with_wd = loss + wd * (parameters**2).sum()
In practice, though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, you might recall that the derivative of `p**2` with respect to `p` is `2*p`, so adding that big sum to our loss is exactly the same as doing:
parameters.grad += wd * 2 * parameters
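To make that equivalence concrete, a manual version of this gradient adjustment might look like the following (an illustrative sketch only; it assumes `loss.backward()` has already populated the gradients and that `model` and `wd` are defined—in practice fastai’s optimizer handles this for you):

with torch.no_grad():
    for p in model.parameters():
        p.grad += wd * 2 * p   # same effect as adding wd * (p**2).sum() to the loss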
In practice, since `wd` is a parameter that we choose, we can just make it twice as big, so we don’t even need the `*2` in this equation. To use weight decay in fastai, just pass `wd` in your call to `fit` or `fit_one_cycle`:
In [ ]:
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3, wd=0.1)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.972090 | 0.962366 | 00:13 |
1 | 0.875591 | 0.885106 | 00:13 |
2 | 0.723798 | 0.839880 | 00:13 |
3 | 0.586002 | 0.823225 | 00:13 |
4 | 0.490980 | 0.823060 | 00:13 |
Much better!
Creating Our Own Embedding Module
So far, we’ve used `Embedding` without thinking about how it really works. Let’s re-create `DotProductBias` without using this class. We’ll need a randomly initialized weight matrix for each of the embeddings. We have to be careful, however. Recall from <> that optimizers require that they can get all the parameters of a module from the module’s `parameters` method. However, this does not happen fully automatically. If we just add a tensor as an attribute to a `Module`, it will not be included in `parameters`:
In [ ]:
class T(Module):
    def __init__(self): self.a = torch.ones(3)

L(T().parameters())
Out[ ]:
(#0) []
To tell `Module` that we want to treat a tensor as a parameter, we have to wrap it in the `nn.Parameter` class. This class doesn’t actually add any functionality (other than automatically calling `requires_grad_` for us). It’s only used as a “marker” to show what to include in `parameters`:
In [ ]:
class T(Module):
    def __init__(self): self.a = nn.Parameter(torch.ones(3))

L(T().parameters())
Out[ ]:
(#1) [Parameter containing:
tensor([1., 1., 1.], requires_grad=True)]
All PyTorch modules use `nn.Parameter` for any trainable parameters, which is why we haven’t needed to explicitly use this wrapper up until now:
In [ ]:
class T(Module):
    def __init__(self): self.a = nn.Linear(1, 3, bias=False)

t = T()
L(t.parameters())
Out[ ]:
(#1) [Parameter containing:
tensor([[-0.9595],
[-0.8490],
[ 0.8159]], requires_grad=True)]
In [ ]:
type(t.a.weight)
Out[ ]:
torch.nn.parameter.Parameter
We can create a tensor as a parameter, with random initialization, like so:
In [ ]:
def create_params(size):
    return nn.Parameter(torch.zeros(*size).normal_(0, 0.01))
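As a quick check (purely illustrative; nothing later depends on it), a tensor created this way has the requested shape and is registered for gradient tracking:

p = create_params([5, 3])
p.shape, p.requires_grad   # (torch.Size([5, 3]), True)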
Let’s use this to create `DotProductBias` again, but without `Embedding`:
In [ ]:
class DotProductBias(Module):
    def __init__(self, n_users, n_movies, n_factors, y_range=(0,5.5)):
        self.user_factors = create_params([n_users, n_factors])
        self.user_bias = create_params([n_users])
        self.movie_factors = create_params([n_movies, n_factors])
        self.movie_bias = create_params([n_movies])
        self.y_range = y_range

    def forward(self, x):
        users = self.user_factors[x[:,0]]    # plain tensor indexing replaces the Embedding lookup
        movies = self.movie_factors[x[:,1]]
        res = (users*movies).sum(dim=1)
        res += self.user_bias[x[:,0]] + self.movie_bias[x[:,1]]
        return sigmoid_range(res, *self.y_range)
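Indexing into a plain parameter matrix performs the same lookup that the `Embedding` layer was doing for us. For instance (a tiny illustrative comparison with made-up sizes):

emb = Embedding(10, 3)            # the layer we used before
w = create_params([10, 3])        # our plain parameter matrix
idx = torch.tensor([0, 2, 2])
emb(idx).shape, w[idx].shape      # both: torch.Size([3, 3])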
Then let’s train it again to check that we get around the same results we saw in the previous section:
In [ ]:
model = DotProductBias(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
learn.fit_one_cycle(5, 5e-3, wd=0.1)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.962146 | 0.936952 | 00:14 |
1 | 0.858084 | 0.884951 | 00:14 |
2 | 0.740883 | 0.838549 | 00:14 |
3 | 0.592497 | 0.823599 | 00:14 |
4 | 0.473570 | 0.824263 | 00:14 |
Now, let’s take a look at what our model has learned.