LRScheduler

class paddle.callbacks.LRScheduler(by_step=True, by_epoch=False) [source]

LRScheduler is a learning rate scheduler callback.

Parameters:

  • by_step (bool, optional) - Whether to update the learning rate after every step. Default: True.

  • by_epoch (bool, optional) - Whether to update the learning rate after every epoch. Default: False.
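
The two flags control when the callback invokes the scheduler's step(). The following is a hypothetical pure-Python sketch of that update policy, not Paddle's actual implementation; FakeScheduler, LRSchedulerSketch, and the hook names are simplified stand-ins for illustration only:

```python
class FakeScheduler:
    """Stand-in for a paddle.optimizer.lr scheduler; counts step() calls."""
    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1


class LRSchedulerSketch:
    """Simplified model of the LRScheduler callback's update policy."""
    def __init__(self, scheduler, by_step=True, by_epoch=False):
        self.scheduler = scheduler
        self.by_step = by_step
        self.by_epoch = by_epoch

    def on_train_batch_end(self):
        if self.by_step:          # update once per training step
            self.scheduler.step()

    def on_epoch_end(self):
        if self.by_epoch:         # update once per epoch
            self.scheduler.step()


def run(callback, epochs=2, batches=5):
    """Simulate a training loop driving the callback hooks."""
    for _ in range(epochs):
        for _ in range(batches):
            callback.on_train_batch_end()
        callback.on_epoch_end()


s1 = FakeScheduler()
run(LRSchedulerSketch(s1, by_step=True, by_epoch=False))
print(s1.steps)  # 10: one update per batch

s2 = FakeScheduler()
run(LRSchedulerSketch(s2, by_step=False, by_epoch=True))
print(s2.steps)  # 2: one update per epoch
```

With by_step=True the scheduler advances 10 times over 2 epochs of 5 batches; with by_epoch=True it advances only twice.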

Code Example

  import paddle
  import paddle.vision.transforms as T
  from paddle.static import InputSpec

  inputs = [InputSpec([-1, 1, 28, 28], 'float32', 'image')]
  labels = [InputSpec([None, 1], 'int64', 'label')]

  transform = T.Compose([
      T.Transpose(),
      T.Normalize([127.5], [127.5])
  ])
  train_dataset = paddle.vision.datasets.MNIST(mode='train', transform=transform)

  lenet = paddle.vision.LeNet()
  model = paddle.Model(lenet, inputs, labels)

  base_lr = 1e-3
  boundaries = [5, 8]
  warmup_steps = 4

  def make_optimizer(parameters=None):
      momentum = 0.9
      weight_decay = 5e-4
      values = [base_lr * (0.1 ** i) for i in range(len(boundaries) + 1)]
      learning_rate = paddle.optimizer.lr.PiecewiseDecay(
          boundaries=boundaries, values=values)
      learning_rate = paddle.optimizer.lr.LinearWarmup(
          learning_rate=learning_rate,
          warmup_steps=warmup_steps,
          start_lr=base_lr / 5.,
          end_lr=base_lr,
          verbose=True)
      optimizer = paddle.optimizer.Momentum(
          learning_rate=learning_rate,
          weight_decay=weight_decay,
          momentum=momentum,
          parameters=parameters)
      return optimizer

  optim = make_optimizer(parameters=lenet.parameters())
  model.prepare(optimizer=optim,
                loss=paddle.nn.CrossEntropyLoss(),
                metrics=paddle.metric.Accuracy())

  # If no LRScheduler callback is set, an LRScheduler instance that updates
  # the learning rate by step is created automatically.
  model.fit(train_dataset, batch_size=64)

  # Create a learning rate scheduler callback that updates by epoch instead.
  callback = paddle.callbacks.LRScheduler(by_step=False, by_epoch=True)
  model.fit(train_dataset, batch_size=64, callbacks=callback)
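
For reference, the warmup portion of the schedule above can be worked out by hand. Assuming LinearWarmup interpolates linearly from start_lr to end_lr over the first warmup_steps updates (lr = start_lr + (end_lr - start_lr) * t / warmup_steps), the constants in the example give:

```python
# Constants from the example above.
base_lr = 1e-3
warmup_steps = 4
start_lr = base_lr / 5.0   # 2e-4
end_lr = base_lr           # 1e-3

# Assumed linear interpolation from start_lr to end_lr over warmup_steps.
warmup_lrs = [start_lr + (end_lr - start_lr) * t / warmup_steps
              for t in range(warmup_steps)]
# Roughly [2e-4, 4e-4, 6e-4, 8e-4]; after warmup, PiecewiseDecay takes over
# with values [1e-3, 1e-4, 1e-5] at boundaries [5, 8].
print(warmup_lrs)
```

This makes it easy to sanity-check the verbose=True output printed by the scheduler during training.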