Momentum
As described in <>, SGD can be thought of as standing at the top of a mountain and working your way down by taking a step in the direction of the steepest slope at each point in time. But what if we have a ball rolling down the mountain? It won’t, at each given point, exactly follow the direction of the gradient, as it will have momentum. A ball with more momentum (for instance, a heavier ball) will skip over little bumps and holes, and be more likely to get to the bottom of a bumpy mountain. A ping pong ball, on the other hand, will get stuck in every little crevice.
So how can we bring this idea over to SGD? We can use a moving average, instead of only the current gradient, to make our step:
weight.avg = beta * weight.avg + (1-beta) * weight.grad
new_weight = weight - lr * weight.avg
Here beta is some number we choose which defines how much momentum to use. If beta is 0, then the first equation becomes weight.avg = weight.grad, so we end up with plain SGD. But if it's a number close to 1, then the main direction chosen is an average of the previous steps. (If you have done a bit of statistics, you may recognize in the first equation an exponentially weighted moving average, which is very often used to denoise data and get the underlying tendency.)
Note that we are writing weight.avg to highlight the fact that we need to store the moving averages for each parameter of the model (they all have their own independent moving averages).
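To make the two update lines above concrete, here is a tiny sketch (not part of the book's code) that applies them to a single weight fed deliberately noisy, alternating gradients; with beta=0 it reduces to plain SGD, while beta=0.9 damps the bouncing:
In [ ]:
# Illustrative sketch only: the momentum update applied to one toy weight.
def momentum_steps(grads, lr=0.1, beta=0.9):
    "Return the step (-lr*avg) taken for each gradient in `grads`, starting from avg=0."
    avg, steps = 0.0, []
    for g in grads:
        avg = beta * avg + (1 - beta) * g   # exponentially weighted moving average
        steps.append(-lr * avg)             # step using the average, not the raw gradient
    return steps

grads = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]   # noisy, alternating gradients
print(momentum_steps(grads, beta=0.0))      # plain SGD: bounces between -0.1 and +0.1
print(momentum_steps(grads, beta=0.9))      # momentum: much smaller, smoother steps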
<> shows an example of noisy data for a single parameter, with the momentum curve plotted in red, and the gradients of the parameter plotted in blue. The gradients increase, then decrease, and the momentum does a good job of following the general trend without getting too influenced by noise.
In [ ]:
#hide_input
#id img_momentum
#caption An example of momentum
#alt Graph showing an example of momentum
x = np.linspace(-4, 4, 100)
y = 1 - (x/3) ** 2
x1 = x + np.random.randn(100) * 0.1
y1 = y + np.random.randn(100) * 0.1
plt.scatter(x1,y1)
idx = x1.argsort()
beta,avg,res = 0.7,0,[]
for j,i in enumerate(idx):                  # j counts the steps; i indexes into the noisy data
    avg = beta * avg + (1-beta) * y1[i]
    res.append(avg/(1-beta**(j+1)))         # bias correction uses the step count, not the data index
plt.plot(x1[idx],np.array(res), color='red');
It works particularly well if the loss function has narrow canyons we need to navigate: vanilla SGD would send us bouncing from one side to the other, while SGD with momentum will average those bounces out and roll smoothly down the canyon. The parameter beta determines the strength of the momentum we are using: with a small beta we stay closer to the actual gradient values, whereas with a high beta we will mostly go in the direction of the average of the gradients, and it will take a while before any change in the gradients makes that trend move.
With a large beta, we might miss that the gradients have changed directions and roll over a small local minimum. This is a desired side effect: intuitively, when we show a new input to our model, it will look like something in the training set but won't be exactly like it. That means it will correspond to a point in the loss function that is close to the minimum we ended up with at the end of training, but not exactly at that minimum. So, we would rather end up training in a wide minimum, where nearby points have approximately the same loss (or, if you prefer, a point where the loss is as flat as possible). <> shows how the chart in <> varies as we change beta.
In [ ]:
#hide_input
#id img_betas
#caption Momentum with different beta values
#alt Graph showing how the beta value influences momentum
x = np.linspace(-4, 4, 100)
y = 1 - (x/3) ** 2
x1 = x + np.random.randn(100) * 0.1
y1 = y + np.random.randn(100) * 0.1
_,axs = plt.subplots(2,2, figsize=(12,8))
betas = [0.5,0.7,0.9,0.99]
idx = x1.argsort()
for beta,ax in zip(betas, axs.flatten()):
    ax.scatter(x1,y1)
    avg,res = 0,[]
    for i in idx:
        avg = beta * avg + (1-beta) * y1[i]
        res.append(avg)                     # no bias correction in this version (compare with the cell above)
    ax.plot(x1[idx],np.array(res), color='red')
    ax.set_title(f'beta={beta}');
We can see in these examples that a beta that's too high results in the overall changes in the gradients getting ignored. In SGD with momentum, a commonly used value for beta is 0.9.
fit_one_cycle by default starts with a beta of 0.95, gradually adjusts it to 0.85, and then gradually moves it back to 0.95 at the end of training. Let's see how our training goes with momentum added to plain SGD.
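Before we do, here is a rough sketch of the shape of that beta schedule. This is only an illustration: it assumes cosine annealing in each phase and a 25% phase split (roughly fit_one_cycle's defaults), and it reuses the np and plt that this notebook already imports; the real curves appear when we call learn.recorder.plot_sched() at the end of this section.
In [ ]:
# Illustrative sketch of the momentum schedule; not fastai's actual scheduler code.
def cos_anneal(start, end, n):
    "Cosine interpolation from `start` to `end` over `n` steps."
    t = np.linspace(0, 1, n)
    return end + (start - end) * (1 + np.cos(np.pi * t)) / 2

n_steps, pct_start = 100, 0.25              # assumed total steps and phase split
warm = int(n_steps * pct_start)
mom_sched = np.concatenate([cos_anneal(0.95, 0.85, warm),            # beta drops while the lr warms up
                            cos_anneal(0.85, 0.95, n_steps - warm)]) # then climbs back as the lr anneals
plt.plot(mom_sched); plt.xlabel('batch'); plt.ylabel('beta');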
In order to add momentum to our optimizer, we'll first need to keep track of the moving average gradient, which we can do with another callback. When an optimizer callback returns a dict, it is used to update the state of the optimizer and is passed back to the optimizer on the next step. So this callback will keep track of the gradient averages in a parameter called grad_avg:
In [ ]:
def average_grad(p, mom, grad_avg=None, **kwargs):
    if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)
    # Note: unlike the weight.avg formula above, the new gradient is not scaled by (1-mom) here.
    return {'grad_avg': grad_avg*mom + p.grad.data}
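To see this state mechanism in action, we can call the callback by hand on a toy parameter and feed its result back in; this is purely an illustrative check, not part of the optimizer code:
In [ ]:
# Illustrative check only: watch the returned dict flow from one call to the next.
p = torch.ones(3, requires_grad=True)
p.grad = torch.full((3,), 2.0)            # pretend the current gradient is 2 everywhere

state = average_grad(p, mom=0.9)          # first call: grad_avg starts at zero
print(state['grad_avg'])                  # tensor([2., 2., 2.])

state = average_grad(p, mom=0.9, **state) # next call: the previous average is passed back in
print(state['grad_avg'])                  # tensor([3.8000, 3.8000, 3.8000]), i.e. 0.9*2 + 2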
To use it, we just have to replace p.grad.data with grad_avg in our step function:
In [ ]:
def momentum_step(p, lr, grad_avg, **kwargs): p.data.add_(grad_avg, alpha=-lr)
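Chaining the two callbacks by hand performs one full momentum update on a toy parameter (again, purely illustrative):
In [ ]:
# Illustrative only: one manual update using both callbacks.
p = torch.zeros(2, requires_grad=True)
p.grad = torch.ones(2)                    # fake gradient of 1 everywhere

state = average_grad(p, mom=0.9)          # on the first step, grad_avg equals p.grad
momentum_step(p, lr=0.1, **state)         # p moves by -lr * grad_avg
print(p.data)                             # tensor([-0.1000, -0.1000])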
In [ ]:
opt_func = partial(Optimizer, cbs=[average_grad,momentum_step], mom=0.9)
Learner will automatically schedule mom and lr, so fit_one_cycle will even work with our custom Optimizer:
In [ ]:
learn = get_learner(opt_func=opt_func)
learn.fit_one_cycle(3, 0.03)
epoch | train_loss | valid_loss | accuracy | time
---|---|---|---|---
0 | 2.856000 | 2.493429 | 0.246115 | 00:10
1 | 2.504205 | 2.463813 | 0.348280 | 00:10
2 | 2.187387 | 1.755670 | 0.418853 | 00:10
In [ ]:
learn.recorder.plot_sched()
We’re still not getting great results, so let’s see what else we can do.